
Exact Inference of a Simple Step-Stress Model with Hybrid Type-II Stress Changing Time

Abstract

In this article, we consider a simple step-stress model for units with exponentially distributed lifetimes. Since the failure rate is lower at the initial stress level, the stress changing time deserves particular attention. Here, we consider a simple step-stress model in which the stress level changes either after a prefixed time or after a prefixed number of failures, whichever occurs later. This ensures a prefixed minimum number of failures at the first stress level while keeping the expected experimental time under control. We obtain the maximum likelihood estimators of the model parameters along with their exact distributions. We establish the monotonicity properties of the maximum likelihood estimators, which can be used to construct exact confidence intervals for the unknown parameters. We also provide approximate and bias-corrected accelerated bootstrap confidence intervals for the model parameters. Further, we define an optimality criterion and, based on it, obtain an optimal stress changing time for a given sample size. Finally, an extensive simulation study is performed to assess the performance of the proposed methods, and two data sets are analyzed for illustrative purposes.



References

  1. Alhadeed AA, Yang S (2005) Optimal simple step-stress plan for cumulative exposure model using log-normal distribution. IEEE Trans Reliab 54:64–68

  2. Bagdonavicius VB, Nikulin M (2002) Accelerated Life Models: Modeling and Statistical Analysis. Chapman and Hall CRC Press, Boca Raton

  3. Bai DS, Kim MS, Lee SH (1989) Optimum simple step-stress accelerated life test with censoring. IEEE Trans Reliab 38:528–532

  4. Balakrishnan N (2009) A synthesis of exact inferential results for exponential step-stress models and associated optimal accelerated life-tests. Metrika 69:351–396

  5. Balakrishnan N, Han D (2008) Optimal step-stress testing for progressively type-I censored data from exponential distribution. J Stat Plan Inference 139:1782–1798

  6. Balakrishnan N, Iliopoulos G (2009) Stochastic monotonicity of the MLE of exponential mean under different censoring schemes. Ann Inst Stat Math 61:753–772

  7. Balakrishnan N, Iliopoulos G (2010) Stochastic monotonicity of the MLEs of parameters in exponential simple step-stress models under type-I and type-II censoring. Metrika 72:89–109

  8. Balakrishnan N, Xie Q (2007a) Exact inference for a simple step-stress model with type-I hybrid censored data from the exponential distribution. J Stat Plan Inference 137:3268–3290

  9. Balakrishnan N, Xie Q (2007b) Exact inference for a simple step-stress model with type-II hybrid censored data from the exponential distribution. J Stat Plan Inference 137:2543–2563

  10. Bhattacharyya GK, Soejoeti Z (1989) A tampered failure rate model for step-stress accelerated life test. Commun Stat Theory Methods 18:1627–1643

  11. Chen SM, Bhattacharyya GK (1988) Exact confidence bound for an exponential parameter under hybrid censoring. Commun Stat Theory Methods 17:1857–1870

  12. Childs A, Chandrasekar B, Balakrishnan N, Kundu D (2003) Exact likelihood inference based on type-I and type-II hybrid censored samples from the exponential distribution. Ann Inst Stat Math 55:319–330

  13. Ganguly A, Kundu D (2016) Analysis of simple step stress model in presence of competing risk. J Stat Comput Simul 86:1989–2006

  14. Han D, Kundu D (2015) Inference for a step-stress model with competing risks for failure from the generalized exponential distribution under type-I censoring. IEEE Trans Reliab 64:31–43

  15. Ismail AA (2016) Statistical inference for a step stress partially accelerated life test model with an adaptive type-I progressively hybrid censored data from Weibull distribution. Stat Papers 57:271–301

  16. Kundu D, Balakrishnan N (2009) Point and interval estimation for a simple step-stress model with random stress-change time. J Probab Stat Sci 7:113–126

  17. Kundu D, Basu S (2000) Analysis of competing risk models in presence of incomplete data. J Stat Plan Inference 87:221–239

  18. Kundu D, Ganguly A (2017) Analysis of Step-Stress Models: Existing Methods and Recent Developments. Elsevier, Amsterdam

  19. Mitra S, Ganguly A, Samanta D, Kundu D (2013) On simple step-stress model for two-parameter exponential distribution. Stat Methodol 15:95–114

  20. Nelson WB (1980) Accelerated life testing: step-stress models and data analysis. IEEE Trans Reliab 29:103–108

  21. Samanta D, Ganguly A, Kundu D, Mitra S (2017) Order restricted Bayesian inference for exponential simple step-stress model. Commun Stat Simul Comput 46:1113–1135

  22. Samanta D, Kundu D, Ganguly A (2017) Order restricted Bayesian analysis of a simple step stress model. Sankhya Ser B. https://doi.org/10.1007/s13571-017-0139-9

  23. Sedyakin NM (1966) On one physical principle in reliability theory. Tech Cybern 3:80–87

  24. Xiong C, Milliken GA (1999) Step-stress life testing with random stress changing times for exponential data. IEEE Trans Reliab 48:141–148


Acknowledgements

The authors would like to thank the reviewers for their constructive comments, which helped to improve the manuscript significantly. Part of the work of the last author has been supported by Grant MTR/2018/000179 from SERB.

Author information

Corresponding author

Correspondence to Debasis Kundu.


Appendix

Proof of Theorem 1

Consider the conditional CDF of \({\widehat{\theta }}_1\) given the event \(A=\{n_1 \le n-1\}\):

$$\begin{aligned}&P({\widehat{\theta }}_1\le x \vert n_1\in A) \\&\quad = \frac{1}{P(n_1 \in A)} P({\widehat{\theta }}_1\le x, n_1\in A) \\&\quad = \frac{1}{P(n_1 \in A)} \left[ \sum _{d=0}^{r-1} P({\widehat{\theta }}_1\le x, n_1\in A,D=d) + \sum _{d=r}^{n-1} P({\widehat{\theta }}_1\le x, n_1\in A,D=d)\right] \\&\quad = \frac{1}{P(n_1 \in A)} \left[ \sum _{d=0}^{r-1} P({\widehat{\theta }}_1\le x \vert D=d) P(D=d) + \sum _{d=r}^{n-1} P({\widehat{\theta }}_1\le x \vert D=d) P(D=d) \right] \end{aligned}$$

For \(d=0,1,\ldots ,r-1\), the conditional MGF of \({\widehat{\theta }}_1\) conditioned on the event \(\{D=d\}\) is derived as follows.

$$\begin{aligned} M_1(\omega )&= E[\hbox {e}^{\omega {\widehat{\theta }}_1}\vert D=d] \\&= E[\hbox {e}^{\frac{\omega }{r}\left\{ \sum _{i=1}^{r-1}t_{i:n} +(n-r+1)t_{r:n}\right\} } \vert D=d] \\&= \frac{n!}{P(D=d)(n-r)!\theta _1^r}\\&\quad \int _{\tau }^{\infty }\int _{\tau }^{t_{r:n}}\cdots \int _{\tau }^{t_{d+2}}\int _{0}^{\tau } \int _{0}^{t_{d:n}} \cdots \int _{0}^{t_{2:n}} \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) \left[ \sum _{i=1}^{r-1}t_{i:n} +(n-r+1)t_{r:n}\right] } \hbox {d}t_{1:n}\cdots \hbox {d}t_{d:n} \hbox {d}t_{d+1:n} \cdots \hbox {d}t_{r:n} \\&= \frac{n!\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) ^{-d}\left( 1-\hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) \tau }\right) ^d \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) (r-d-1)\tau }}{P(D=d)(n-r)!d!\theta _1^r} \\&\quad \int _{\tau }^{\infty } \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) (n-r+1)t_{r:n}} \int _{\tau }^{t_{r:n}}\cdots \int _{\tau }^{t_{d+2}} \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) \sum _{i=d+1}^{r-1}(t_{i:n}-\tau )} \hbox {d}t_{d+1:n} \cdots \hbox {d}t_{r:n} \\&= \frac{n!\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) ^{-(r-1)} \left( 1-\hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) \tau }\right) ^d \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) (r-d-1)\tau }}{P(D=d)(n-r)!d!(r-d-1)!\theta _1^r}\\&\quad \int _{\tau }^{\infty } \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) (n-r+1) t_{r:n}}\left[ 1-\hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) (t_{r:n}-\tau )}\right] ^{r-d-1} \hbox {d}t_{r:n} \\&= \frac{1}{P(D=d)}{n\atopwithdelims ()d} \left( 1-\frac{\omega \theta _1}{r}\right) ^{-r} \hbox {e}^{-\left( \frac{1}{\theta _1} -\frac{\omega }{r}\right) (n-d)\tau }\left[ 1-\hbox {e}^{-\left( \frac{1}{\theta _1} -\frac{\omega }{r}\right) \tau }\right] ^d \\&= \frac{1}{P(D=d)}\sum _{j=0}^{d}{n\atopwithdelims ()d} {d\atopwithdelims ()j} (-1)^j \hbox {e}^{-\left( \frac{1}{\theta _1}-\frac{\omega }{r}\right) (n-d+j)\tau } \left( 1-\frac{\omega \theta _1}{r}\right) ^{-r} \end{aligned}$$

Again for \(d=r,r+1,\ldots ,n-1\), the conditional MGF of \({\widehat{\theta }}_1\) conditioned on the event \(\{D=d\}\) is derived as follows.

$$\begin{aligned} M_2(\omega )&= \frac{n!}{P(D=d)(n-d)!\theta _1^d} \int _{0}^{\tau }\int _{0}^{t_{d:n}} \cdots \int _{0}^{t_{2:n}} \hbox {e}^{-\left( \frac{1}{\theta _1} -\frac{\omega }{d}\right) \left[ \sum _{i=1}^{d} t_{i:n}+(n-d)\tau \right] } \hbox {d}t_{1:n}\cdots \hbox {d}t_{d:n} \\&= \frac{n!\hbox {e}^{-(\frac{1}{\theta _1}- \frac{\omega }{d})(n-d)\tau }}{P(D=d)(n-d)!d!\theta _1^d} \left[ 1-\hbox {e}^{-(\frac{1}{\theta _1}-\frac{\omega }{d})\tau }\right] ^d \left( \frac{1}{\theta _1}-\frac{\omega }{d}\right) ^{-d} \\&= \frac{1}{P(D=d)} \sum _{j=0}^{d} {n\atopwithdelims ()d} {d\atopwithdelims ()j} (-1)^j \hbox {e}^{-(\frac{1}{\theta _1}-\frac{\omega }{d})(n-d+j)\tau } \left( 1-\frac{\omega \theta _1}{d}\right) ^{-d} \end{aligned}$$

Therefore, using the uniqueness property of MGF, the CDF of \({\widehat{\theta }}_1\) is obtained as

$$\begin{aligned} P(\widehat{\theta _1}\le x \vert n_1\in A)&=\frac{1}{p(\theta _{1})} \left[ \sum _{d=0}^{r-1}\sum _{j=0}^{d} c_{dj}(\theta _{1}) \Gamma \left( x-\mu _{dj},\frac{r}{\theta _{1}},r\right) \right. \nonumber \\&\quad \left. +\, \sum _{d=r}^{n-1}\sum _{j=0}^{d} c_{dj}(\theta _{1}) \Gamma \left( x-\mu _{dj}',\frac{d}{\theta _{1}},d\right) \right] . \end{aligned}$$
(11)

Proof of Theorem 2

Consider the conditional CDF of \({\widehat{\theta }}_2\) given the event \(A=\{ n_1 \le n-1\}\):

$$\begin{aligned}&P({\widehat{\theta }}_2\le x \vert n_1\in A) \\&\quad = \frac{1}{P(n_1 \in A)} P({\widehat{\theta }}_2\le x, n_1\in A) \\&\quad = \frac{1}{P(n_1 \in A)} \left[ \sum _{d=0}^{r-1} P({\widehat{\theta }}_2\le x, n_1\in A,D=d) + \sum _{d=r}^{n-1} P({\widehat{\theta }}_2\le x, n_1\in A,D=d)\right] \\&\quad = \frac{1}{P(n_1 \in A)} \left[ \sum _{d=0}^{r-1} P({\widehat{\theta }}_2\le x \vert D=d) P(D=d) + \sum _{d=r}^{n-1} P({\widehat{\theta }}_2\le x \vert D=d) P(D=d) \right] . \end{aligned}$$

Now, consider the conditional MGF of \({\widehat{\theta }}_2\) for \(d=0,1,\ldots ,r-1\),

$$\begin{aligned} M_3(\omega )&= E[\hbox {e}^{\omega {\widehat{\theta }}_2}\vert D=d] \\&= \frac{n!\left( 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right) ^d \hbox {e}^{-\frac{\tau (r-d-1)}{\theta _1}}}{P(D=d) d! (r-d-1)! \theta _1 \theta _2^{n-r}} \int _{\tau }^\infty \left[ 1-\hbox {e}^{-\frac{t_{r:n}-\tau }{\theta _1}}\right] ^{r-d-1} \hbox {e}^{-\frac{(n-r+1)t_{r:n}}{\theta _1}} \hbox {d}t_{r:n} \\&\quad \int _{t_{r:n}}^{\infty } \int _{t_{r:n}}^{t_{n:n}} \cdots \int _{t_{r:n}}^{t_{r+2}} \hbox {e}^{-\left( \frac{1}{\theta _2}-\frac{\omega }{n-r}\right) \sum _{i=r+1}^{n}(t_{i:n}-t_{r:n})} \hbox {d}t_{r+1:n}\cdots \hbox {d}t_{n:n} \\&= \frac{n!(1-\hbox {e}^{-\frac{\tau }{\theta _1}})^d \hbox {e}^{-\frac{\tau (r-d-1)}{\theta _1}}\left( \frac{1}{\theta _2} -\frac{\omega }{n-r}\right) ^{-(n-r)}}{P(D=d) d! (r-d-1)! (n-r)! \theta _1 \theta _2^{n-r}} \int _{\tau }^\infty \left[ 1-\hbox {e}^{-\frac{t_{r:n}-\tau }{\theta _1}}\right] ^{r-d-1} \hbox {e}^{-\frac{(n-r+1)t_{r:n}}{\theta _1}} \hbox {d}t_{r:n} \\&= \frac{n!\left( 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right) ^d \hbox {e}^{-\frac{\tau (r-d-1)}{\theta _1}}\left( \frac{1}{\theta _2} -\frac{\omega }{n-r}\right) ^{-(n-r)}\hbox {e}^{-\frac{\tau (n-r+1)}{\theta _1}}B(n-r+1,r-d)}{P(D=d) d! (r-d-1)! (n-r)! \theta _2^{n-r}} \\&= \frac{1}{P(D=d)} {n\atopwithdelims ()d} \left( 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right) ^d \hbox {e}^{-\frac{\tau (n-d)}{\theta _1}} \left( 1-\frac{\omega \theta _2}{n-r}\right) ^{-(n-r)}. \end{aligned}$$

Similarly, the conditional MGF of \({\widehat{\theta }}_2\) for \(d=r,r+1,\ldots ,n-1\) is derived as follows.

$$\begin{aligned} M_3(\omega )&= E[\hbox {e}^{\omega {\widehat{\theta }}_2}\vert D=d] \\&= E[\hbox {e}^{\frac{\omega }{n-d} \sum _{i=d+1}^{n} (t_{i:n}-\tau )}\vert D=d] \\&= \frac{n!\left( 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right) ^d \hbox {e}^{-\frac{\tau (n-d)}{\theta _1}}}{P(D=d)d!\theta _2^{(n-d)}} \int _{\tau }^{\infty } \cdots \int _{\tau }^{t_{d+3:n}}\\&\quad \int _{\tau }^{t_{d+2:n}} \hbox {e}^{-\left( \frac{1}{\theta _2}-\frac{\omega }{n-d}\right) \sum _{i=d+1}^{n}(t_{i:n}-\tau )} \hbox {d}t_{d+1:n} \cdots \hbox {d}t_{n:n} \\&= \frac{1}{P(D=d)} {n\atopwithdelims ()d} \left( 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right) ^d \hbox {e}^{-\frac{\tau (n-d)}{\theta _1}} \left( 1-\frac{\omega \theta _2}{n-d}\right) ^{-(n-d)}. \end{aligned}$$

Therefore, using the uniqueness property of the MGF, the CDF of \({\widehat{\theta }}_2\) is obtained as

$$\begin{aligned} P({\widehat{\theta }}_2\le x \vert n_1\in A)&=\frac{1}{p(\theta _1)} \left[ \sum _{d=0}^{r-1} c_{d}(\theta _{1}) \Gamma \left( x,\frac{n-r}{\theta _{2}},n-r\right) \right. \nonumber \\&\quad\left. +\, \sum _{d=r}^{n-1} c_{d}(\theta _{1}) \Gamma \left( x,\frac{n-d}{\theta _{2}},n-d\right) \right] . \end{aligned}$$
(12)

Proof of Lemma 1

To establish that \(P_{\theta _1}({\widehat{\theta }}_1 \le x | n_1 \in A)\) is a decreasing function of \(\theta _1\), we use the three monotonicity lemmas given in Balakrishnan and Iliopoulos [6, 7]. In our case, we verify that each of the three lemmas holds. For \(x>0\), the distribution function of \({\widehat{\theta }}_1\) can be written as

$$\begin{aligned} P_{\theta _1}({\widehat{\theta }}_1 \le x | n_1 \in A)&= \sum _{d=0}^{n-1} P [{\widehat{\theta }}_1 \le x |D=d, n_1 \in A] P(D=d| n_1 \in A) \nonumber \\&= \sum _{d=0}^{r-1} P [{\widehat{\theta }}_1 \le x |D=d, n_1 \in A] P(D=d| n_1 \in A) \nonumber \\&\quad +\, \sum _{d=r}^{n-1} P [{\widehat{\theta }}_1 \le x |D=d, n_1 \in A] P(D=d| n_1 \in A). \end{aligned}$$
(13)

Note that for \(d=0,1,\ldots , r-1\), the event \(\{D=d\} \subset \{n_1 \in A\}\) and for \(d=r,r+1,\ldots , n-1\), the event \(\{D=d\} = \{n_1 \in A\}\). Thus, Eq. (13) becomes

$$\begin{aligned} P_{\theta _1}({\widehat{\theta }}_1 \le x | n_1 \in A)&= \frac{1}{P(n_1 \in A)}\left[ \sum _{d=0}^{r-1} P [{\widehat{\theta }}_1 \le x |D=d] P(D=d) \right. \nonumber \\&\quad \left. +\, \sum _{d=r}^{n-1} P [{\widehat{\theta }}_1 \le x |D=d] P(D=d)\right] \end{aligned}$$
(14)

or equivalently,

$$\begin{aligned} P_{\theta _1}({\widehat{\theta }}_1> x | n_1 \in A)&= \frac{1}{P(n_1 \in A)}\left[ \sum _{d=0}^{r-1} P [{\widehat{\theta }}_1> x |D=d] P(D=d) \right. \nonumber \\&\quad \left. +\, \sum _{d=r}^{n-1} P [{\widehat{\theta }}_1 > x |D=d] P(D=d)\right] . \end{aligned}$$
(15)

This is the same representation as given in Balakrishnan and Iliopoulos [6, 7]. Note that \(P(n_1 \in A)= 1-(1-\hbox {e}^{-\frac{T}{\theta _1}})^n\), which is an increasing function of \(\theta _1\), and hence \(\frac{1}{P(n_1 \in A)}\) is a decreasing function of \(\theta _1\).
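As a quick numerical sanity check, this monotonicity can be verified directly; a minimal Python sketch (the values of \(T\) and \(n\) are illustrative, not taken from the paper):

```python
import math

def p_A(theta1, T=1.0, n=10):
    # P(n_1 in A) = 1 - (1 - exp(-T/theta1))^n for A = {n_1 <= n-1};
    # T and n are illustrative values, not from the paper's examples.
    return 1.0 - (1.0 - math.exp(-T / theta1)) ** n

thetas = [0.25, 0.5, 1.0, 2.0, 4.0]
vals = [p_A(t) for t in thetas]
# p_A increases in theta1, so 1/p_A decreases in theta1.
assert all(a < b for a, b in zip(vals, vals[1:]))
assert all(1 / a > 1 / b for a, b in zip(vals, vals[1:]))
```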

Lemma (M-1)

Case 1 \(d \in \{0,1,\ldots ,r-1\}\)

$$\begin{aligned} {\widehat{\theta }}_1=\frac{\sum _{i=1}^{r-1} X_{i:n}+(n-r+1)X_{r:n}}{r}. \end{aligned}$$

Clearly, \(\sum _{i=1}^{r-1} X_{i:n}+(n-r+1)X_{r:n}=\sum _{i=1}^{d} X_{i:n}+\sum _{i=d+1}^{r-1} X_{i:n}+(n-r+1)X_{r:n}\). Before we proceed further, note that given \(d \in \{0,1,\ldots , r-1\}\), the random variables \(\{X_{1:n}, \ldots , X_{d:n}\} {\mathop {=}\limits ^{{\mathrm{d}}}}\{U_{1:d}, \ldots , U_{d:d}\},\) where \(U_1, \ldots , U_d\) are iid exponential random variables with mean \(\theta _1\), right truncated at time point \(T\), and the random variables \(\{X_{d+1:n}, \ldots , X_{r:n}\}{\mathop {=}\limits ^{{\mathrm{d}}}}\{V_{1:n-d}, \ldots , V_{r-d:n-d}\},\) where \(V_1, \ldots , V_{n-d}\) are iid exponential random variables with mean \(\theta _1\), left truncated at time point \(T\). Hence, \(\sum _{i=1}^{d} X_{i:n}+\sum _{i=d+1}^{r-1} X_{i:n}+(n-r+1)X_{r:n}{\mathop {=}\limits ^{{\mathrm{d}}}}\sum _{i=1}^{d} U_{i:d}+\sum _{i=1}^{r-d-1} V_{i:n-d}+(n-r+1)V_{r-d:n-d}\). Since both left- and right-truncated exponential random variables are stochastically increasing in \(\theta _1\), their sum is also stochastically increasing in \(\theta _1,\) and hence for \(d\in \{0,1,\ldots , r-1\}\), the conditional distribution of \({\hat{\theta }}_1\), given \(D=d,\) is stochastically increasing in \(\theta _1\).

Case 2 \(d \in \{r,r+1,\ldots ,n-1\}\)

$$\begin{aligned} {\widehat{\theta }}_1=\frac{\sum _{i=1}^{d} X_{i:n}+(n-d)T}{d}. \end{aligned}$$

As before, it is noted that given \(d \in \{r,r+1,\ldots ,n-1\}\), \(\{X_{1:n},\ldots ,X_{d:n}\}{\mathop {=}\limits ^{{\mathrm{d}}}}\{U_{1:d},\ldots ,U_{d:d}\}\) with the same random variable U as defined above. Thus, \(\sum _{i=1}^{d} X_{i:n}+(n-d)T{\mathop {=}\limits ^{{\mathrm{d}}}}\sum _{i=1}^{d} U_{i:d}+(n-d)T\) and this is stochastically increasing in \(\theta _1\). Hence, for \(d\in \{r,r+1,\ldots , n-1\}\), conditional distribution of \({\hat{\theta }}_1\), given \(D=d,\) is stochastically increasing in \(\theta _1\).
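The stochastic ordering in Case 2 can be illustrated by simulation; the Python sketch below samples the conditional MLE using inverse-CDF draws of the right-truncated exponential (the choices of \(n\), \(d\), \(T\) and the comparison points are arbitrary illustrations):

```python
import numpy as np

def sample_theta1_hat(theta1, n=20, d=12, T=1.0, size=50_000, seed=0):
    # Sample (sum of d right-truncated Exp(theta1) lifetimes + (n-d)*T) / d,
    # the MLE given D = d failures before T; illustrative parameter values.
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=(size, d))
    # Inverse CDF of Exp(theta1) truncated to (0, T).
    u = -theta1 * np.log1p(-v * (1.0 - np.exp(-T / theta1)))
    return (u.sum(axis=1) + (n - d) * T) / d

lo = sample_theta1_hat(theta1=0.5)
hi = sample_theta1_hat(theta1=2.0)
# Stochastic increase in theta1: the survival function under the larger
# theta1 dominates (checked at a few points).
for x in (0.6, 0.8, 1.0):
    assert (lo > x).mean() <= (hi > x).mean()
```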

Lemma (M-2)

Case 1 \(d \in \{0,1,\ldots ,r-1\}\)

Note that for \(d \in \{0,1,\ldots ,r-2\}\) it is clear that \(({\widehat{\theta }}_1|D=d)-({\widehat{\theta }}_1|D=d+1)= 0\). When \(d=r-1\), then \(({\widehat{\theta }}_1|D=r-1)-({\widehat{\theta }}_1|D=r){\mathop {=}\limits ^{{\mathrm{d}}}}\frac{\sum _{i=1}^{r-1} X_{i:n}+(n-r+1)X_{r:n}}{r} -\frac{\sum _{i=1}^{r} X_{i:n}+(n-r)T}{r}\) \({\mathop {=}\limits ^{{\mathrm{d}}}}\) \(\frac{\sum _{i=1}^{r-1} U_{i:r-1}+(n-r+1)V_{1:n-r+1}}{r}-\frac{\sum _{i=1}^{r} U_{i:r}+(n-r)T}{r}\), where \(U_{i:r-1}\) and \(V_{1:n-r+1}\) are the same as defined in Lemma (M-1). Clearly for \(i=1,2,\ldots , r-1\), \(U_{i:r-1} \ge U_{i:r}\) and \(U_{r:r} \le T \le V_{1:n-r+1}\). Thus, \(({\widehat{\theta }}_1|D=r-1)-({\widehat{\theta }}_1|D=r) \ge 0,\) and hence for \(d\in \{0,1,\ldots , r-1\}\), the conditional distribution of \({\widehat{\theta }}_1\), given \(D=d,\) is stochastically decreasing in \(d\).

Case 2 \(d \in \{r,r+1,\ldots ,n-1\}\)

\(({\widehat{\theta }}_1|D=d)-({\widehat{\theta }}_1|D=d+1){\mathop {=}\limits ^{{\mathrm{d}}}}\frac{\sum _{i=1}^{d} X_{i:n}+(n-d)T}{d}-\frac{\sum _{i=1}^{d+1} X_{i:n}+(n-d-1)T}{d+1} {\mathop {=}\limits ^{{\mathrm{d}}}}\frac{\sum _{i=1}^{d} U_{i:n}+(n-d)T}{d}-\frac{\sum _{i=1}^{d+1} U_{i:n}+(n-d-1)T}{d+1} \ge \frac{\sum _{i=1}^{d} U_{i:n}+(n-d)T}{d+1}- \frac{\sum _{i=1}^{d+1} U_{i:n}+(n-d-1)T}{d+1}=\frac{T-U_{d+1:n}}{d+1} \ge 0\). Thus, for \(d\in \{r,r+1,\ldots , n-1\}\), conditional distribution of \({\hat{\theta }}_1\), given \(D=d,\) is stochastically decreasing in d.

Lemma (M-3)

Note that \(D\) is a binomial random variable with parameters \(n\) and \(1-{\hbox {exp}}(-\frac{T}{\theta _1})\). Let us consider \(\theta _1 \le \theta ^{'}_1\). Then we have,

$$\begin{aligned} \frac{P_{\theta _1}(D=d)}{P_{\theta ^{'}_1}(D=d)} \propto \left[ \frac{\hbox {exp}\left( \frac{T}{\theta _1}\right) -1}{\hbox {exp} \left( \frac{T}{\theta ^{'}_1}\right) -1}\right] ^d. \end{aligned}$$

This ratio is increasing in \(d\). Thus, \(D\) has the monotone likelihood ratio property with respect to \(\theta _1\), and hence \(D\) is stochastically decreasing in \(\theta _1\). Thus all three lemmas are established, and hence \(\sum \limits _{d=0}^{n-1} P[{\widehat{\theta }}_1>x|D=d]P(D=d)\) is an increasing function of \(\theta _1\). Hence, \(\sum \limits _{d=0}^{n-1}P[{\widehat{\theta }}_1\le x|D=d]P(D=d)\) is a decreasing function of \(\theta _1\). Thus, using the fact that \(\frac{1}{P(n_1 \in A)}\) is a decreasing function of \(\theta _1\), we have for \(\theta _1 \le \theta ^{'}_1\), \(P_{\theta _1}({\widehat{\theta }}_1\le x|n_1 \in A) \ge P_{\theta ^{'}_1}({\widehat{\theta }}_1\le x|n_1 \in A)\).
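The stochastic decrease of \(D\) in \(\theta _1\) is easy to confirm numerically; a small Python sketch with illustrative \(n\) and \(T\) (the binomial CDF is computed directly):

```python
import math

def binom_cdf(d, n, p):
    # P(D <= d) for D ~ Binomial(n, p), summed directly.
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(d + 1))

def fail_prob(theta1, T=1.0):
    # Probability of a unit failing before the prefixed time T at stress 1;
    # T is an illustrative value.
    return 1.0 - math.exp(-T / theta1)

n = 10
# Larger theta1 -> smaller failure probability -> D stochastically smaller,
# i.e. P(D <= d) is larger for every d.
for d in range(n):
    assert binom_cdf(d, n, fail_prob(2.0)) >= binom_cdf(d, n, fail_prob(0.5))
```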

Proof of Lemma 2

Note that,

$$\begin{aligned} \Gamma \left( x,\frac{n-r}{\theta _{2}},n-r\right)&= \frac{\left( \frac{n-r}{\theta _2}\right) ^{n-r}}{\Gamma (n-r)} \int _{0}^{x} \hbox {e}^{-\frac{(n-r)t}{\theta _2}} t^{n-r-1} \hbox {d}t\\&= \frac{1}{\Gamma (n-r)} \int _{0}^{(n-r)x/\theta _2} \hbox {e}^{-u} u^{n-r-1} \hbox {d}u. \end{aligned}$$

Clearly, the above function is a decreasing function of \(\theta _2\). Similarly, for \(d = r, r+1, \ldots , n-1\), \(\Gamma (x,\frac{n-d}{\theta _{2}},n-d)\) is a decreasing function of \(\theta _2\). Hence, for any \(x>0\), \(P({\widehat{\theta }}_2 \le x| n_1 \in A )\) is a decreasing function of \(\theta _2\).
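Lemma 2 can also be checked numerically by evaluating the integral in the display above; a Python sketch (illustrative \(n\), \(r\), \(x\); a simple trapezoidal rule stands in for a library incomplete-gamma routine):

```python
import math

def gamma_cdf(x, rate, shape, steps=20_000):
    # Gamma(shape, rate) CDF via the substituted integral
    # (1/Gamma(shape)) * int_0^{rate*x} e^{-u} u^{shape-1} du (trapezoidal rule).
    upper = rate * x
    if upper <= 0:
        return 0.0
    h = upper / steps

    def f(u):
        return math.exp(-u) * u ** (shape - 1)

    s = 0.5 * (f(0.0) + f(upper)) + sum(f(i * h) for i in range(1, steps))
    return h * s / math.gamma(shape)

n, r, x = 10, 4, 1.0
# Lemma 2: Gamma(x, (n-r)/theta2, n-r) is decreasing in theta2.
vals = [gamma_cdf(x, (n - r) / th2, n - r) for th2 in (0.5, 1.0, 2.0, 4.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```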

Proof of Lemma 3

Proceeding similarly as in “Appendix A.4,” it is immediate that \(\Gamma (x-\mu _{dj}, \frac{r}{\theta _1}, r) = \frac{1}{\Gamma (r)} \int \limits _{0}^{\frac{r(x-\mu _{dj})}{\theta _1}} \hbox {e}^{-u} u^{r-1} \hbox {d}u\) \(\rightarrow 0\) as \(\theta _1 \rightarrow \infty\). Similarly, \(\Gamma (x-\mu _{dj}', \frac{d}{\theta _1}, d) = \frac{1}{\Gamma (d)} \int \limits _{0}^{\frac{d(x-\mu _{dj}')}{\theta _1}} \hbox {e}^{-u} u^{d-1} \hbox {d}u\) \(\rightarrow 0\) as \(\theta _1 \rightarrow \infty\). Also \(p(\theta _1) \rightarrow 1\) as \(\theta _1 \rightarrow \infty\). Hence, \(P({\widehat{\theta }}_1 \le x| n_1 \in A) \rightarrow 0\) as \(\theta _1 \rightarrow \infty\).

When \(\theta _1 \rightarrow 0\), we have the following observations,

  (i) \(\Gamma (x-\mu _{dj}, \frac{r}{\theta _1}, r) \rightarrow 1.\)

  (ii) \(\Gamma (x-\mu _{dj}', \frac{d}{\theta _1}, d) \rightarrow 1.\) Hence,

    $$\begin{aligned} \lim _{\theta _1 \rightarrow 0} P({\widehat{\theta }}_1 \le x| n_1 \in A)= & {} \lim _{\theta _1 \rightarrow 0}\frac{1}{p(\theta _1)}\left[ \sum _{d=0}^{r-1}\sum _{j=0}^{d} c_{dj}(\theta _{1}) + \sum _{d=r}^{n-1}\sum _{j=0}^{d} c_{dj}(\theta _{1}) \right] \\= & {} \lim _{\theta _1 \rightarrow 0} \frac{p(\theta _1)}{p(\theta _1)}=1. \end{aligned}$$

    Similar arguments establish that

    $$\begin{aligned} \lim _{\theta _2 \rightarrow \infty } P({\widehat{\theta }}_2 \le x| n_1 \in A)=0 \end{aligned}$$

    and

    $$\begin{aligned} \lim _{\theta _2 \rightarrow 0} P({\widehat{\theta }}_2 \le x| n_1 \in A)= 1. \end{aligned}$$
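These limits can be illustrated with the closed-form Erlang CDF (integer shape), evaluating the gamma terms at an arbitrary point \(x\) with the shift taken as zero for simplicity:

```python
import math

def erlang_cdf(x, rate, k):
    # CDF of Gamma(k, rate) with integer shape k (Erlang):
    # 1 - exp(-rate*x) * sum_{j<k} (rate*x)^j / j!
    lx = rate * x
    return 1.0 - math.exp(-lx) * sum(lx**j / math.factorial(j) for j in range(k))

r, x = 5, 2.0
# As theta1 -> infinity the rate r/theta1 -> 0, so the CDF at x tends to 0;
# as theta1 -> 0 the rate blows up and the CDF tends to 1.
assert erlang_cdf(x, r / 1e6, r) < 1e-6
assert erlang_cdf(x, r / 1e-6, r) > 1 - 1e-6
```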

Expected Experimental Time of an Exponential Simple SSLT Experiment

Here, we want to calculate \(E(T_{n:n})\), where \(T_{i:n}\), \(i=1,2,\ldots ,n\), denotes the \(i\)th order statistic of the observed failure times. Note that, for \(x>0\), the distribution function of \(T_{n:n}\) can be written as

$$\begin{aligned} P(T_{n:n}\le x) = P(T_{n:n}\le x, \tau<T_{r:n}) + P(T_{n:n}\le x, T_{r:n}<\tau ) \end{aligned}$$
(16)

We calculate the two probabilities of the right-hand side of Eq. (16) separately. Note that, for \(x<\tau\),

$$\begin{aligned}&P(T_{n:n}\le x, \tau<T_{r:n}) =P(T_{n:n}\le x< \tau <T_{r:n}) =0, \end{aligned}$$
(17)

and

$$\begin{aligned} P(T_{n:n}\le x, T_{r:n}<\tau )&=P(T_{r:n}<T_{n:n}\le x<\tau )\nonumber \\&= \int _{0}^{x} \int _{0}^{t_n} \cdots \int _{0}^{t_2} n! \left( \frac{1}{\theta _1}\right) ^n \hbox {e}^{-\frac{1}{\theta _1}\sum _{i=1}^n t_i} \hbox {d}t_1\cdots \hbox {d}t_{n-1} \hbox {d}t_n~ \nonumber \\&=\left[ 1-\hbox {e}^{-\frac{x}{\theta _1}}\right] ^n. \end{aligned}$$
(18)

For \(x>\tau\),

$$\begin{aligned}&P(T_{n:n}\le x, \tau<T_{r:n})\nonumber \\&\quad =P(\tau<T_{r:n}<T_{n:n}\le x)\nonumber \\&\quad =n! \left( \frac{1}{\theta _1}\right) ^r\left( \frac{1}{\theta _2}\right) ^{n-r} \int _{\tau }^{x}\left[ \int _{0}^{t_r}\cdots \int _{0}^{t_2} \hbox {e}^{-\frac{1}{\theta _1}\sum _{i=1}^{r-1} t_i} \hbox {d}t_1\cdots \hbox {d}t_{r-1} \int _{t_{r}}^{x} \cdots \int _{t_{r}}^{t_{r+1}} \hbox {e}^{-\frac{1}{\theta _2} \sum _{i=r+1}^n (t_i-t_r)}\right. \nonumber \\&\qquad \left. \hbox {d}t_{r+1}\cdots \hbox {d}t_n \right] \hbox {e}^{-\frac{1}{\theta _1}(n-r+1)t_r} \hbox {d}t_r\nonumber \\&\quad = n! \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \frac{1}{(r-1)!(n-r)!} \int _{\tau }^{x}\left[ \left( \frac{1}{\theta _1}\right) ^{r-1} (r-1)! \int _{0}^{t_r}\cdots \int _{0}^{t_2} \hbox {e}^{-\frac{1}{\theta _1}\sum _{i=1}^{r-1} t_i} \hbox {d}t_1\cdots \hbox {d}t_{r-1}\right. \nonumber \\&\qquad \left. ~\int _{t_{r}}^{x} \cdots \int _{t_{r}}^{t_{r+1}} \left( \frac{1}{\theta _2}\right) ^{n-r} (n-r)! \hbox {e}^{-\frac{1}{\theta _2} \sum _{i=r+1}^n (t_i-t_r)} \hbox {d}t_{r+1}\cdots \hbox {d}t_n \right] \hbox {e}^{-\frac{1}{\theta _1}(n-r+1)t_r} \hbox {d}t_r\nonumber \\&\quad = r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \int _{\tau }^x \left[ 1-\hbox {e}^{-\frac{t_r}{\theta _1}}\right] ^{r-1} \left[ 1-\hbox {e}^{-\frac{1}{\theta _2}(x-t_r)}\right] ^{n-r} \hbox {d}t_r\nonumber \\&\quad =r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \sum _{i=0}^{r-1}\sum _{j=0}^{n-r} \hbox {e}^{-\frac{j}{\theta _2}x} \left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) (-1)^{i+j} \int _{\tau }^x \hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) t_r} \hbox {d}t_r\nonumber \\&\quad = r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right)
\sum _{i=0}^{r-1}\sum _{j=0}^{n-r} \hbox {e}^{-\frac{j}{\theta _2}x} \left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) (-1)^{i+j} \left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) ^{-1}\nonumber \\&\qquad \left[ \hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) \tau } -\hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) x}\right] \end{aligned}$$
(19)
$$\begin{aligned}&P(T_{n:n}\le x, T_{r:n}<\tau )=P(T_{r:n}<\tau<T_{n:n}\le x)+P(T_{r:n}<T_{n:n}<\tau < x) \end{aligned}$$
(20)

Now,

$$\begin{aligned}&P(T_{r:n}<\tau<T_{n:n}\le x)\nonumber \\&\quad = \sum _{d=r}^{n-1} P(T_{r:n}<\tau <T_{n:n}\le x, D=d)\nonumber&\\&\quad = \sum _{d=r}^{n-1} n! \left( \frac{1}{\theta _1}\right) ^d \left( \frac{1}{\theta _2}\right) ^{n-d}\int _{0}^{\tau } \cdots \int _{0}^{t_2}\hbox {e}^{-\frac{1}{\theta _1} [\sum _{i=1}^d t_i+(n-d)\tau ]} \hbox {d}t_1 \cdots \hbox {d}t_d\times \nonumber \\&\qquad \int _{\tau }^{x} \cdots \int _{\tau }^{t_{d+2}} \hbox {e}^{-\frac{1}{\theta _2}\sum _{i=d+1}^n (t_i-\tau )} \hbox {d}t_{d+1} \cdots \hbox {d}t_n\nonumber \\&\quad = \sum _{d=r}^{n-1} n! \frac{1}{d!(n-d)!} d! \left( \frac{1}{\theta _1}\right) ^d \int _{0}^{\tau } \cdots \int _{0}^{t_2}\hbox {e}^{-\frac{1}{\theta _1} [\sum _{i=1}^d t_i+(n-d)\tau ]} \hbox {d}t_1 \cdots \hbox {d}t_d\times \nonumber \\&\qquad (n-d)! \left( \frac{1}{\theta _2}\right) ^{n-d} \int _{\tau }^{x} \cdots \int _{\tau }^{t_{d+2}} \hbox {e}^{-\frac{1}{\theta _2}\sum _{i=d+1}^n (t_i-\tau )} \hbox {d}t_{d+1} \cdots \hbox {d}t_n\nonumber \\&\quad = \sum _{d=r}^{n-1} \left( {\begin{array}{c}n\\ d\end{array}}\right) \left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^d \hbox {e}^{-\frac{\tau }{\theta _1}(n-d)} \left[ 1-\hbox {e}^{-\frac{x-\tau }{\theta _2}}\right] ^{n-d} \end{aligned}$$
(21)
$$\begin{aligned}&P(T_{r:n}<T_{n:n}<\tau <x)\nonumber \\&\quad = \int _{0}^{\tau } \int _{0}^{t_n} \cdots \int _{0}^{t_2} n! \left( \frac{1}{\theta _1}\right) ^n \hbox {e}^{-\frac{1}{\theta _1}\sum _{i=1}^n t_i} \hbox {d}t_1\cdots \hbox {d}t_{n-1} \hbox {d}t_n \nonumber \\&\quad =\left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^n. \end{aligned}$$
(22)

Thus, from Eqs. (17), (18), (19), (21) and (22), we obtain the distribution function of \(T_{n:n}\) at the point \(x>0\) as

$$\begin{aligned} G(x)=P(T_{n:n}\le x)={\left\{ \begin{array}{ll} G_1(x)&{} \text {if } 0<x<\tau ,\\ G_2(x)&{}\text {if } \tau <x, \end{array}\right. } \end{aligned}$$
(23)

where

$$\begin{aligned} G_1(x)=\left[ 1-\hbox {e}^{-\frac{x}{\theta _1}}\right] ^n \end{aligned}$$
(24)

and

$$\begin{aligned} G_2(x)&= r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \sum _{i=0}^{r-1}\sum _{j=0}^{n-r} \hbox {e}^{-\frac{j}{\theta _2}x} \left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) \left( \frac{i}{\theta _1} -\frac{j}{\theta _2}\right) ^{-1}(-1)^{i+j}\nonumber \\&\quad \times \,\left[ \hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) \tau } -\hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) x}\right] +\sum _{d=r}^{n-1} \left( {\begin{array}{c}n\\ d\end{array}}\right) \hbox {e}^{-\frac{\tau }{\theta _1}(n-d)}\left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^d\left[ 1-\hbox {e}^{-\frac{x-\tau }{\theta _2}}\right] ^{n-d}\nonumber \\&\quad +\,\left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^n \end{aligned}$$
(25)

Hence, the density function of \(T_{n:n}\) at the point \(x>0\) is obtained as

$$\begin{aligned} g(x)= \frac{d}{dx} G(x)= {\left\{ \begin{array}{ll} g_1(x)&{} \text {if } 0<x<\tau ,\\ g_2(x)&{} \text {if } \tau <x, \end{array}\right. } \end{aligned}$$
(26)

where

$$\begin{aligned} g_1(x)&=\frac{d}{dx} G_1(x) = \frac{n}{\theta _1} \left[ 1-\hbox {e}^{-\frac{x}{\theta _1}}\right] ^{n-1} \hbox {e}^{-\frac{x}{\theta _1}} \end{aligned}$$
(27)

and

$$\begin{aligned} g_2(x)&=\frac{d}{dx} G_2(x)\nonumber \\&= r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \sum _{i=0}^{r-1} \sum _{j=0}^{n-r}\left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) \left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) ^{-1}(-1)^{i+j} \nonumber \\&\quad \times \,\left[ \frac{i}{\theta _1} \hbox {e}^{-\frac{i}{\theta _1} x} -\frac{j}{\theta _2} \hbox {e}^{-\frac{j}{\theta _2}x-\left( \frac{i}{\theta _1} -\frac{j}{\theta _2}\right) \tau }\right] + \sum _{d=r}^{n-1} \left( {\begin{array}{c}n\\ d\end{array}}\right) \frac{n-d}{\theta _2}\hbox {e}^{-\frac{\tau }{\theta _1}(n-d)} \left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^d \nonumber \\&\quad \times \,\left[ 1-\hbox {e}^{-\frac{x-\tau }{\theta _2}}\right] ^{n-d-1} \hbox {e}^{-\frac{x-\tau }{\theta _2}}. \end{aligned}$$
(28)

Hence, \(E(T_{n:n})= \int _{0}^{\tau } x g_1(x)dx+\int _{\tau }^{\infty } x g_2(x)\hbox {d}x\). We find these two integrals separately.

$$\begin{aligned} \int _{0}^{\tau } x g_1(x)\hbox {d}x&=\int _{0}^{\tau } n \frac{x}{\theta _1} \left[ 1-\hbox {e}^{-\frac{x}{\theta _1}}\right] ^{n-1} \hbox {e}^{-\frac{x}{\theta _1}}\hbox {d}x\nonumber \\&= n \sum _{i=0}^{n-1} \left( {\begin{array}{c}n-1\\ i\end{array}}\right) (-1)^i \int _{0}^{\tau } \frac{x}{\theta _1} \hbox {e}^{-\frac{x}{\theta _1}(i+1)} \hbox {d}x\nonumber \\&= \sum _{i=0}^{n-1} \left( {\begin{array}{c}n\\ i+1\end{array}}\right) (-1)^i \frac{\theta _1}{i+1} \Gamma \left( \frac{\tau }{\theta _1}(i+1),1,2\right) \end{aligned}$$
(29)
$$\begin{aligned} \int _{\tau }^{\infty } x g_2(x)\hbox {d}x&= r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \sum _{i=0}^{r-1}\sum _{j=0}^{n-r}\left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) \left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) ^{-1}(-1)^{i+j} \nonumber \\&\quad \times \, \left[ \frac{i}{\theta _1}\int _{\tau }^{\infty }x \hbox {e}^{-\frac{i}{\theta _1} x}\hbox {d}x-\frac{j}{\theta _2} \int _{\tau }^{\infty } x \hbox {e}^{-\frac{j}{\theta _2}x-(\frac{i}{\theta _1}-\frac{j}{\theta _2})\tau }\hbox {d}x\right] + \sum _{d=r}^{n-1} \left( {\begin{array}{c}n\\ d\end{array}}\right) \frac{n-d}{\theta _2}\hbox {e}^{-\frac{\tau }{\theta _1}(n-d)} \nonumber \\&\quad \times \,\left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^d \sum _{i=0}^{n-d-1} \left( {\begin{array}{c}n-d-1\\ i\end{array}}\right) (-1)^i\int _{\tau }^{\infty } x \hbox {e}^{-\frac{x-\tau }{\theta _2}(i+1)} \hbox {d}x\nonumber \\&= r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \sum _{i=0}^{r-1}\sum _{j=0}^{n-r}\left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) \left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) ^{-1}(-1)^{i+j} \nonumber \\&\quad \times \, \left[ \frac{\theta _1}{i}\left[ 1-\Gamma \left( \frac{\tau i}{\theta _1}, 1,2\right) \right] -\hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) \tau }\frac{\theta _2}{j} \left[ 1-\Gamma \left( \frac{\tau j}{\theta _2}, 1,2\right) \right] \right] + \sum _{d=r}^{n-1} \left( {\begin{array}{c}n\\ d\end{array}}\right) \frac{n-d}{\theta _2}\hbox {e}^{-\frac{\tau }{\theta _1}(n-d)} \nonumber \\&\quad \times \, \left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^d \sum _{i=0}^{n-d-1} \left( {\begin{array}{c}n-d-1\\ i\end{array}}\right) (-1)^i \hbox {e}^{\frac{\tau }{\theta _2}(i+1)} \left( \frac{\theta _2}{i+1}\right) ^2 \left[ 1-\Gamma \left( \frac{\tau }{\theta _2}(i+1),1,2\right) \right] . \end{aligned}$$
(30)

Thus,

$$\begin{aligned} E(T_{n:n})&=\sum _{i=0}^{n-1} \left( {\begin{array}{c}n\\ i+1\end{array}}\right) (-1)^i \frac{\theta _1}{i+1} \Gamma \left( \frac{\tau }{\theta _1}(i+1),1,2\right) \nonumber \\&\quad +\, r \left( {\begin{array}{c}n\\ r\end{array}}\right) \left( \frac{1}{\theta _1}\right) \left( \frac{1}{\theta _2}\right) \sum _{i=0}^{r-1}\sum _{j=0}^{n-r}\left( {\begin{array}{c}r-1\\ i\end{array}}\right) \left( {\begin{array}{c}n-r\\ j\end{array}}\right) \left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) ^{-1}(-1)^{i+j}\nonumber \\&\quad \times \, \left[ \frac{\theta _1}{i}\left[ 1-\Gamma \left( \frac{\tau i}{\theta _1}, 1,2\right) \right] -\hbox {e}^{-\left( \frac{i}{\theta _1}-\frac{j}{\theta _2}\right) \tau }\frac{\theta _2}{j} \left[ 1-\Gamma \left( \frac{\tau j}{\theta _2}, 1,2\right) \right] \right] + \sum _{d=r}^{n-1} \left( {\begin{array}{c}n\\ d\end{array}}\right) \frac{n-d}{\theta _2}\hbox {e}^{-\frac{\tau }{\theta _1}(n-d)} \nonumber \\&\quad \times \, \left[ 1-\hbox {e}^{-\frac{\tau }{\theta _1}}\right] ^d \sum _{i=0}^{n-d-1} \left( {\begin{array}{c}n-d-1\\ i\end{array}}\right) (-1)^i \hbox {e}^{\frac{\tau }{\theta _2}(i+1)} \left( \frac{\theta _2}{i+1}\right) ^2 \left[ 1-\Gamma \left( \frac{\tau }{\theta _2}(i+1),1,2\right) \right] . \end{aligned}$$
(31)
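As a check on these expressions, the experiment itself can be simulated; the Python sketch below assumes a cumulative exposure model for the lifetimes after the stress change (an assumption of this sketch, with arbitrary illustrative parameter values), and verifies the first branch \(G_1(x)=(1-\hbox {e}^{-x/\theta _1})^n\) of the CDF:

```python
import numpy as np

def simulate_Tnn(theta1, theta2, n, r, tau, size=100_000, seed=1):
    # Latent stage-1 lifetimes x_i ~ Exp(theta1); the stress changes at
    # tau* = max(tau, x_{r:n}); under the cumulative exposure assumption,
    # lifetimes beyond tau* are rescaled by theta2/theta1.
    rng = np.random.default_rng(seed)
    x = rng.exponential(theta1, size=(size, n))
    tau_star = np.maximum(tau, np.sort(x, axis=1)[:, r - 1])[:, None]
    t = np.where(x <= tau_star, x, tau_star + (theta2 / theta1) * (x - tau_star))
    return t.max(axis=1)  # total experimental time T_{n:n} per replication

theta1, theta2, n, r, tau = 1.0, 0.5, 5, 2, 2.0
Tnn = simulate_Tnn(theta1, theta2, n, r, tau)

# For x < tau the CDF reduces to G_1(x) = (1 - exp(-x/theta1))^n,
# since T_{n:n} <= x < tau forces every failure to occur at stress 1.
x = 1.8
G1 = (1.0 - np.exp(-x / theta1)) ** n
assert abs((Tnn <= x).mean() - G1) < 0.01
```

The Monte Carlo mean `Tnn.mean()` then gives a direct estimate of \(E(T_{n:n})\) against which Eq. (31) can be compared.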

Cite this article

Samanta, D., Koley, A., Gupta, A. et al. Exact Inference of a Simple Step-Stress Model with Hybrid Type-II Stress Changing Time. J Stat Theory Pract 14, 12 (2020). https://doi.org/10.1007/s42519-019-0072-5


Keywords

  • Step-stress life-tests
  • Maximum likelihood estimator
  • Approximate confidence interval
  • Bias-corrected accelerated bootstrap confidence interval
  • Optimality