
On generalized progressive hybrid censoring in presence of competing risks


Abstract

The progressive Type-II hybrid censoring scheme introduced by Kundu and Joarder (Comput Stat Data Anal 50:2509–2528, 2006) has received considerable attention in the last few years. One major drawback of this censoring scheme is that very few observations (possibly none at all) may be observed before the end of the experiment. To overcome this problem, Cho et al. (Stat Methodol 23:18–34, 2015) recently introduced generalized progressive hybrid censoring, which guarantees a pre-specified number of failures. In this paper we analyze generalized progressive hybrid censored data in the presence of competing risks. For brevity we consider only two competing causes of failure, and it is assumed that the lifetimes of the competing causes follow one-parameter exponential distributions with different scale parameters. We obtain the maximum likelihood estimators of the unknown parameters and also provide their exact distributions, based on which exact confidence intervals can be constructed. Asymptotic and bootstrap confidence intervals are also provided for comparison purposes. We further consider Bayesian analysis of the unknown parameters under a very flexible beta–gamma prior, and provide the Bayes estimates and the associated credible intervals based on this prior. We present extensive simulation results to assess the effectiveness of the proposed methods, and finally one real data set is analyzed for illustrative purposes.


References

  • Balakrishnan N, Cramer E (2014) The art of progressive censoring. Birkhäuser, New York

  • Balakrishnan N, Childs A, Chandrasekar B (2002) An efficient computational method for moments of order statistics under progressive censoring. Stat Probab Lett 60:359–365

  • Balakrishnan N, Xie Q, Kundu D (2009) Exact inference for a simple step-stress model from the exponential distribution under time constraint. Ann Inst Stat Math 61:251–274

  • Balakrishnan N, Kundu D (2013) Hybrid censoring: models, inferential results and applications. Comput Stat Data Anal 57:166–209 (with discussion)

  • Balakrishnan N, Cramer E, Iliopoulos G (2014) On the method of pivoting the CDF for exact confidence intervals with illustration for exponential mean under life-test with time constraint. Stat Probab Lett 89:124–130

  • Bhattacharya S, Pradhan B, Kundu D (2014) Analysis of hybrid censored competing risks data. Statistics 48(5):1138–1154

  • Chan P, Ng H, Su F (2015) Exact likelihood inference for the two-parameter exponential distribution under Type-II progressively hybrid censoring. Metrika 78:747–770

  • Chen SM, Bhattacharyya GK (1987) Exact confidence bound for an exponential parameter under hybrid censoring. Commun Stat Theory Methods 16:2429–2442

  • Childs A, Chandrasekar B, Balakrishnan N, Kundu D (2003) Exact likelihood inference based on Type-I and Type-II hybrid censored samples from the exponential distribution. Ann Inst Stat Math 55:319–330

  • Cho Y, Sun H, Lee K (2015) Exact likelihood inference for an exponential parameter under generalized progressive hybrid censoring scheme. Stat Methodol 23:18–34

  • Cohen AC (1963) Progressively censored samples in life testing. Technometrics 5:327–329

  • Cox DR (1959) The analysis of exponentially distributed life-times with two types of failure. J R Stat Soc Ser B 21:411–421

  • Cramer E, Balakrishnan N (2013) On some exact distributional results based on Type-I progressively hybrid censored data from exponential distribution. Stat Methodol 10:128–150

  • Crowder M (2001) Classical competing risks. Chapman & Hall/CRC, London

  • Epstein B (1954) Truncated life tests in the exponential case. Ann Math Stat 25:555–564

  • Górny J, Cramer E (2016) Exact likelihood inference for exponential distribution under generalized progressive hybrid censoring schemes. Stat Methodol 29:70–94

  • Hemmati F, Khorram E (2013) Statistical analysis of log-normal distribution under Type-II progressive hybrid censoring schemes. Commun Stat Simul Comput 42:52–75

  • Hoel DG (1972) A representation of mortality data by competing risks. Biometrics 28:475–488

  • Kalbfleisch JD, Prentice RL (1980) The statistical analysis of failure time data. Wiley, New York

  • Kundu D, Basu S (2000) Analysis of incomplete data in presence of competing risks. J Stat Plan Inference 87:221–239

  • Kundu D, Joarder A (2006) Analysis of Type-II progressively hybrid censored data. Comput Stat Data Anal 50:2509–2528

  • Kundu D, Gupta RD (2007) Analysis of hybrid life-tests in presence of competing risks. Metrika 65(2):159–170

  • Lawless JF (1982) Statistical models and methods for lifetime data. Wiley, New York

  • Pena EA, Gupta AK (1990) Bayes estimation for the Marshall–Olkin exponential distribution. J R Stat Soc Ser B 52:379–389

  • Prentice RL, Kalbfleisch JD, Peterson AV Jr, Flournoy N, Farewell VT, Breslow NE (1978) The analysis of failure times in the presence of competing risks. Biometrics 34:541–554


Acknowledgements

The authors would like to thank the referees for their constructive suggestions which have helped us to improve the manuscript significantly.

Author information

Correspondence to Debasis Kundu.

Appendix: the proof of the main theorem

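Throughout the appendix we use the latent-lifetime formulation of the model: each unit fails at \(X=\min (X_1,X_2)\), where the cause-specific lifetimes \(X_1\) and \(X_2\) are independent exponential random variables with means (scale parameters) \(\theta _1\) and \(\theta _2\). For convenience we record here the quantities implied by this assumption, which are consistent with the exponents of (8), (10) and (12) below:

$$\begin{aligned} \frac{1}{\theta }=\frac{1}{\theta _1}+\frac{1}{\theta _2}, \qquad P(\text {failure due to Cause 1})=\frac{\theta _2}{\theta _1+\theta _2}, \qquad P(\text {failure due to Cause 2})=\frac{\theta _1}{\theta _1+\theta _2}, \end{aligned}$$

with \(J\) denoting the number of failures observed before \(T\) and \(D_1\) the number of observed failures due to Cause 1.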

First we derive the distribution function of \(\widehat{\theta }_1\), which is given below.

$$\begin{aligned} F_{\widehat{\theta }_1| D_1>0}(x)= & {} P(\widehat{\theta }_1 \le x | D_1>0) \nonumber \\= & {} P(\widehat{\theta }_1 \le x, A | D_1>0)+P(\widehat{\theta }_1 \le x, B | D_1>0) +P(\widehat{\theta }_1 \le x, C | D_1>0) \nonumber \\= & {} \sum _{j=k}^{m-1} \sum _{i=1}^j P(\widehat{\theta }_1 \le x| J=j, D_1=i)P(J=j, D_1=i | D_1>0) \nonumber \\&+\sum _{i=1}^k P(\widehat{\theta }_1 \le x| B, D_1=i) P(B, D_1=i | D_1>0) \nonumber \\&+ \sum _{i=1}^m P(\widehat{\theta }_1 \le x| C, D_1=i) P(C, D_1=i | D_1>0), \end{aligned}$$
(7)

where,

$$\begin{aligned}&A=\{Z_{k:m:n}<T<Z_{m:m:n}\},\quad B=\{T<Z_{k:m:n}<Z_{m:m:n}\}, \\&C=\{Z_{k:m:n}<Z_{m:m:n}<T\}. \end{aligned}$$
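The events \(A\), \(B\) and \(C\) correspond to the three possible termination times \(T^*=T\), \(T^*=Z_{k:m:n}\) and \(T^*=Z_{m:m:n}\) of the generalized progressive hybrid censoring scheme. The following is a minimal simulation sketch (not part of the paper) of how competing-risks data arise under this scheme; it assumes the usual conventions that \(\gamma _v=\sum _{j=v}^m(R_j+1)\) is the number of units on test just before the \(v\)-th failure, that the normalized spacings \(\gamma _v(Z_{v:m:n}-Z_{v-1:m:n})\) are i.i.d. exponential with mean \(\theta \), and that for exponential competing risks the maximum likelihood estimator of \(\theta _1\) is the total time on test divided by \(D_1\).

import numpy as np

rng = np.random.default_rng(1)

def simulate_gphc_competing(theta1, theta2, R, k, T, rng):
    # Generalized progressive hybrid censoring with two exponential competing
    # causes (sketch; conventions assumed, not taken verbatim from the paper):
    #   * theta1, theta2 are the mean lifetimes of the latent causes, so each
    #     observed lifetime is exponential with mean theta, where
    #     1/theta = 1/theta1 + 1/theta2, and an observed failure is due to
    #     Cause 1 with probability theta2/(theta1+theta2);
    #   * gamma_v = sum_{j>=v}(R_j+1) units remain on test just before the
    #     v-th failure, and gamma_v*(Z_v - Z_{v-1}) are i.i.d. exponential(theta).
    m = len(R)
    theta = 1.0 / (1.0 / theta1 + 1.0 / theta2)
    gamma = np.array([sum(r + 1 for r in R[v:]) for v in range(m)])
    Z = np.cumsum(rng.exponential(theta, size=m) / gamma)   # Z_1 < ... < Z_m
    cause1 = rng.random(m) < theta2 / (theta1 + theta2)     # cause of each failure
    Rarr = np.array(R)

    if Z[k - 1] < T < Z[m - 1]:                   # case A: stop at T* = T
        case, j = "A", int(np.searchsorted(Z, T))              # J failures before T
        W = np.sum((1 + Rarr[:j]) * Z[:j]) + T * gamma[j]      # gamma[j] = R*_J removed at T
    elif T < Z[k - 1]:                            # case B: stop at T* = Z_k
        case, j = "B", k
        W = np.sum((1 + Rarr[:k - 1]) * Z[:k - 1]) + gamma[k - 1] * Z[k - 1]
    else:                                         # case C: stop at T* = Z_m
        case, j = "C", m
        W = np.sum((1 + Rarr) * Z)
    d1 = int(np.sum(cause1[:j]))                  # D_1 = observed Cause-1 failures
    return case, W, d1, j - d1

# One simulated sample and the corresponding estimates W/D_1 and W/D_2
# (the assumed MLE form; defined only when the corresponding count is positive).
theta1, theta2 = 2.0, 3.0
R, k, T = [2, 0, 1, 0, 2], 3, 1.5                 # n = m + sum(R) = 10 units
case, W, d1, d2 = simulate_gphc_competing(theta1, theta2, R, k, T, rng)
print(case, W, d1, d2,
      W / d1 if d1 > 0 else None,
      W / d2 if d2 > 0 else None)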

To compute the terms on the right-hand side of (7), we need the following lemmas.

Lemma 1

The joint density of \(Z_{1:m:n},\ldots , Z_{J:m:n}\) given \(J=j, D_1=i\), for \(i=1,\ldots ,j\) and \(j=k,\ldots , m-1\), at \((z_1,\ldots , z_j)\) is given by,

$$\begin{aligned}&f_{Z_{1:m:n},\ldots , Z_{j:m:n}|J=j, D_1=i}(z_1, \ldots , z_j)\nonumber \\&=\frac{\prod _{v=1}^j \gamma _v}{P(J=j, D_1=i)} \Big (\frac{1}{\theta _1}\Big )^i \Big (\frac{1}{\theta _2}\Big )^{j-i} e^{-\frac{1}{\theta } (\sum _{s=1}^j z_s (1+R_s) + TR^*_j)}. \end{aligned}$$
(8)

Proof of Lemma 1

For \(j=k,k+1,\ldots, m-1\) and \(i=1,2,\ldots , j\), consider the left-hand side of (8):

$$\begin{aligned}&P(z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_j<Z_{j:m:n}<z_j+dz_j|J=j, D_1=i) \nonumber \\&\quad =\frac{P(z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_j<Z_{j:m:n}<z_j+dz_j,J=j, D_1=i)}{P(J=j, D_1=i)} \end{aligned}$$
(9)

Note that the event \(\{z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_j<Z_{j:m:n}<z_j+dz_j,J=j, D_1=i\}\), for \(i=1,\ldots ,j\) and \(j=k,\ldots ,m-1\), simply states that \(j\) units fail before the time point \(T\), at the times \(z_1,\ldots ,z_j\), and that \(i\) of them fail due to Cause 1. The probability of this event is the likelihood contribution of the data when \(T^*=T\). Thus (9) becomes

$$\begin{aligned} \prod _{v=1}^j \gamma _v \bigg (\frac{1}{\theta _1}\bigg )^i \bigg (\frac{1}{\theta _2}\bigg )^{j-i}\frac{e^{-\frac{1}{\theta }\big [\sum _{s=1}^{j} (1+R_s)z_s + TR^*_j\big ]}}{P(J=j,D_1=i)} \; dz_1\ldots dz_j. \end{aligned}$$

\(\square \)

Lemma 2

The joint density of \(Z_{1:m:n},\ldots , Z_{k:m:n}\) given \(T<Z_{k:m:n}<Z_{m:m:n}, D_1=i\), for \(i=1,\ldots ,k\), at \((z_1,\ldots , z_k)\) is given by

$$\begin{aligned}&f_{Z_{1:m:n},\ldots , Z_{k:m:n}|T<Z_{k:m:n}<Z_{m:m:n}, D_1=i}(z_1, \ldots , z_k) \nonumber \\&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n}, D_1=i)} \Big (\frac{1}{\theta _1}\Big )^i \Big (\frac{1}{\theta _2}\Big )^{k-i} e^{-\frac{1}{\theta } (\sum _{s=1}^{k-1} z_s (1+R_s) + z_k (1+R^*_k))}. \end{aligned}$$
(10)

Proof of Lemma 2

For \(i=1,2, \ldots , k\), consider the left-hand side of (10):

$$\begin{aligned}&P(z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_k<Z_{k:m:n}<z_k+dz_k|T<Z_{k:m:n}<Z_{m:m:n},D_1=i)\nonumber \\&\quad =\frac{P(z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_k<Z_{k:m:n}<z_k+dz_k,T<Z_{k:m:n}<Z_{m:m:n},D_1=i)}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \end{aligned}$$
(11)

Note that the event \(\{z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_k<Z_{k:m:n}<z_k+dz_k,T<Z_{k:m:n}, D_1=i\}\), for \(i=1,\ldots ,k\), simply states that \(k\) units fail by the experiment termination point \(Z_{k:m:n}\), at the times \(z_1,\ldots ,z_k\), and that \(i\) of them fail due to Cause 1. The probability of this event is the likelihood contribution of the data when \(T^*=Z_{k:m:n}\). Thus (11) becomes

$$\begin{aligned} \prod _{v=1}^k \gamma _v\bigg (\frac{1}{\theta _1}\bigg )^i \bigg (\frac{1}{\theta _2}\bigg )^{k-i}\frac{e^{-\frac{1}{\theta }\big [\sum _{s=1}^{k-1} (1+R_s)z_s + z_k(1+R^*_k)\big ]}}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \; dz_1\ldots dz_k. \end{aligned}$$

\(\square \)

Lemma 3

The joint density of \(Z_{1:m:n},\ldots , Z_{m:m:n}\) given \(Z_{k:m:n}<Z_{m:m:n}<T, D_1=i\), for \(i=1,\ldots ,m\), at \((z_1,\ldots , z_m)\) is given by,

$$\begin{aligned}&f_{Z_{1:m:n},\ldots , Z_{m:m:n}|Z_{k:m:n}<Z_{m:m:n}<T, D_1=i}(z_1, \ldots , z_m) \nonumber \\&\quad =\frac{\prod _{v=1}^m \gamma _v}{P(Z_{k:m:n}<Z_{m:m:n}<T, D_1=i)} \Big (\frac{1}{\theta _1}\Big )^i \Big (\frac{1}{\theta _2}\Big )^{m-i} e^{-\frac{1}{\theta }\sum _{s=1}^{m} (1+R_s)z_s}. \end{aligned}$$
(12)

Proof of Lemma 3

For \(i=1,2,\ldots, m\), consider the left-hand side of (12):

$$\begin{aligned}&P(z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_m<Z_{m:m:n}<z_m+dz_m|Z_{k:m:n}<Z_{m:m:n}<T,D_1=i) \nonumber \\&\quad =\frac{P(z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_m<Z_{m:m:n}<z_m+dz_m,Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \end{aligned}$$
(13)

Note that the event \(\{z_1<Z_{1:m:n}<z_1+dz_1,\ldots ,z_m<Z_{m:m:n}<z_m+dz_m,Z_{k:m:n}<Z_{m:m:n}<T, D_1=i\}\), for \(i=1,\ldots ,m\), simply states that \(m\) units fail by the experiment termination point \(Z_{m:m:n}\), at the times \(z_1,\ldots ,z_m\), and that \(i\) of them fail due to Cause 1. The probability of this event is the likelihood contribution of the data when \(T^*=Z_{m:m:n}\). Thus (13) becomes

$$\begin{aligned} \prod _{v=1}^m \gamma _v\bigg (\frac{1}{\theta _1}\bigg )^i \bigg (\frac{1}{\theta _2}\bigg )^{m-i}\frac{e^{-\frac{1}{\theta }\sum _{s=1}^{m} (1+R_s)z_s}}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \; dz_1\ldots dz_m. \end{aligned}$$
(14)

\(\square \)

Theorem 3

The conditional moment generating function of \(\widehat{\theta }_1\) given \(J=j, D_1=i\) for \(i=1,\ldots , j\) and \(j=k,\ldots , m-1\) is given by

$$\begin{aligned}&\quad E(e^{t\widehat{\theta }_1}|J=j, D_1=i)\\&\quad = \frac{\prod _{v=1}^j \gamma _v}{P(J=j,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{j-i} \Big (\frac{1}{\theta }-\frac{t}{i}\Big )^{-j} \\&\qquad \times \sum _{v=0}^j \frac{(-1)^v e^{-T(\frac{1}{\theta }-\frac{t}{i})\gamma _{j-v+1}}}{\{\prod _{h=1}^v (\gamma _{j+1-v}-\gamma _{j+1-v+h})\}\{\prod _{h=1}^{j-v} (\gamma _h-\gamma _{j-v+1})\}}. \end{aligned}$$

Proof

$$\begin{aligned}&E[e^{t\widehat{\theta }_1}|J=j,D_1=i]\\&\quad =\frac{\prod _{v=1}^j \gamma _v}{P(J=j,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{j-i}\\&\qquad \times \int _{0}^T \int _{0}^{z_j}\ldots \int _{0}^{z_2} e^{-z_1(1+R_1)(\frac{1}{\theta }-\frac{t}{i})} \ldots e^{-z_j(1+R_j)(\frac{1}{\theta }-\frac{t}{i})} e^{-TR^*_j(\frac{1}{\theta } -\frac{t}{i})} \ dz_1\ldots dz_j \end{aligned}$$

The above equality follows from Lemma 1; hence,

$$\begin{aligned}&\quad =\frac{\prod _{v=1}^j \gamma _v}{P(J=j,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{j-i} e^{-TR^*_j(\frac{1}{\theta }-\frac{t}{i})}\\&\qquad \times \int _{0}^T \int _{0}^{z_j}\ldots \int _{0}^{z_2} e^{-z_1} e^{-z_1(1+R_1)(\frac{1}{\theta }-\frac{t}{i})-1} \ldots e^{-z_j} e^{-z_{j}(1+R_{j})(\frac{1}{\theta }-\frac{t}{i})-1}dz_1\ldots dz_j\\&=\frac{\prod _{v=1}^j \gamma _v}{P(J=j,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{j-i} \Big (\frac{1}{\theta }-\frac{t}{i}\Big )^{-j} \\&\qquad \times \sum _{v=0}^j \frac{(-1)^v e^{-T(\frac{1}{\theta }-\frac{t}{i})\gamma _{j-v+1}}}{\{\prod _{h=1}^v (\gamma _{j+1-v}-\gamma _{j+1-v+h})\}\{\prod _{h=1}^{j-v} (\gamma _h-\gamma _{j-v+1})\}} \end{aligned}$$

The last equality follows from Lemma 1 of Balakrishnan et al. (2002). \(\square \)

Corollary 1

The conditional distribution of \(\widehat{\theta }_1\) given \(J=j, D_1=i\) for \(i=1,\ldots ,j\) and \(j=k,\ldots ,m-1\) is given by,

$$\begin{aligned} f_{\widehat{\theta }_1|J=j,D_1=i}(x)&=\frac{\prod _{v=1}^j \gamma _v}{P(J=j,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{j-i} \Big (\frac{1}{\theta }\Big )^{-j}\\&\qquad \times \sum _{v=0}^j \frac{(-1)^v e^{-\frac{T}{\theta }\gamma _{j-v+1}}}{\{\prod _{h=1}^v (\gamma _{j+1-v}-\gamma _{j+1-v+h})\}\{\prod _{h=1}^{j-v} (\gamma _h-\gamma _{j-v+1})\}}\\&\qquad \times f_G\Big (x;\frac{T}{i}\gamma _{j-v+1},j,\frac{i}{\theta }\Big ). \end{aligned}$$
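Numerically, the density in Corollary 1 (and likewise those in Corollaries 2 and 3 below) is a generalized mixture of shifted gamma densities and is straightforward to evaluate. The following short sketch (not from the paper) assumes that \(f_G(x;\mu ,p,\lambda )\) denotes the gamma density with shape \(p\) and rate \(\lambda \) shifted to start at \(\mu \); the weights and shifts used in the example are purely hypothetical, whereas in practice they are read off the corollary.

import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gamma

def shifted_gamma_pdf(x, mu, shape, rate):
    # Assumed convention: f_G(x; mu, p, lambda) is the gamma(shape=p, rate=lambda)
    # density shifted to start at mu (it is zero for x <= mu).
    return gamma.pdf(x - mu, a=shape, scale=1.0 / rate)

def generalized_mixture_pdf(x, weights, shifts, shape, rate):
    # Densities of the Corollary-1 type are sums c_v * f_G(x; a_v, p, lambda);
    # the weights c_v may be negative, but they sum to one, so the whole
    # expression still integrates to one.
    x = np.atleast_1d(x)
    return sum(c * shifted_gamma_pdf(x, a, shape, rate)
               for c, a in zip(weights, shifts))

# Purely illustrative weights/shifts (hypothetical numbers, not from the paper).
weights, shifts = [1.4, -0.4], [0.5, 0.8]
xs = np.linspace(0.0, 12.0, 4001)
pdf = generalized_mixture_pdf(xs, weights, shifts, shape=3, rate=1.5)
print("integrates to approximately:", trapezoid(pdf, xs))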

Theorem 4

The conditional moment generating function of \(\widehat{\theta }_1\) given \(T<Z_{k:m:n}<Z_{m:m:n}, D_1=i\) for \(i=1,\ldots , k\) is given by,

$$\begin{aligned}&\quad E(e^{t\widehat{\theta }_1}|T<Z_{k:m:n}<Z_{m:m:n}, D_1=i)\\&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{k-i} \Big (\frac{1}{\theta }-\frac{t}{i}\Big )^{-k} \\&\qquad \times \sum _{v=0}^{k-1} \frac{(-1)^v e^{-T(\frac{1}{\theta }-\frac{t}{i})\gamma _{k-v}}}{\{\prod _{j=1}^v (\gamma _{k-v}-\gamma _{k-v+j})\}\{\prod _{j=1}^{k-1-v} (\gamma _j-\gamma _{k-v})\}\gamma _{k-v}}. \end{aligned}$$

Proof

$$\begin{aligned}&E[e^{t\widehat{\theta }_1}|T<Z_{k:m:n}<Z_{m:m:n},D_1=i]\\&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{k-i}\\&\qquad \times \int _{T}^\infty \int _{0}^{z_k}\ldots \int _{0}^{z_2} e^{-z_1(1+R_1)(\frac{1}{\theta }-\frac{t}{i})} \ldots e^{-z_{k-1}(1+R_{k-1})(\frac{1}{\theta }-\frac{t}{i})}e^{-z_k(1+R^*_k)(\frac{1}{\theta }-\frac{t}{i})} dz_1\ldots dz_k \end{aligned}$$

The above equality follows from Lemma 2; hence,

$$\begin{aligned}&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{k-i}\\&\qquad \times \int _{T}^\infty \Big [\int _{0}^{z_k}\ldots \int _{0}^{z_2} e^{-z_1} e^{-z_1(1+R_1)(\frac{1}{\theta }-\frac{t}{i})-1} \ldots e^{-z_{k-1}} e^{-z_{k-1}(1+R_{k-1})(\frac{1}{\theta }-\frac{t}{i})-1}dz_1\ldots dz_{k-1} \Big ]\\&\ \ \ \ \ \ \ \times e^{-z_k(1+R^*_k)(\frac{1}{\theta }-\frac{t}{i})} dz_k\\&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{k-i} \frac{1}{(\frac{1}{\theta }-\frac{t}{i})^{k-1}}\\&\qquad \times \int _T^\infty \sum _{v=0}^{k-1} \frac{(-1)^v e^{-z_k(\frac{1}{\theta }-\frac{t}{i})\gamma _v}}{\{\prod _{j=1}^v (\gamma _{k-v}-\gamma _{k-v+j})\}\{\prod _{j=1}^{k-1-v} (\gamma _j-\gamma _{k-v})\}} dz_k \end{aligned}$$

The last equality follows from Lemma 1 of Balakrishnan et al. (2002). Evaluating the remaining integral over \(z_k\),

$$\begin{aligned}&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{k-i} \Big (\frac{1}{\theta }-\frac{t}{i}\Big )^{-k} \\&\qquad \times \sum _{v=0}^{k-1} \frac{(-1)^v e^{-T(\frac{1}{\theta }-\frac{t}{i})\gamma _{k-v}}}{\{\prod _{j=1}^v (\gamma _{k-v}-\gamma _{k-v+j})\}\{\prod _{j=1}^{k-1-v} (\gamma _j-\gamma _{k-v})\}\gamma _{k-v}}. \end{aligned}$$

\(\square \)

Corollary 2

The conditional distribution of \(\widehat{\theta }_1\) given \(T<Z_{k:m:n}<Z_{m:m:n}, D_1=i\) for \(i=1,\ldots , k\) is given by,

$$\begin{aligned}&f_{\widehat{\theta }_1|T<Z_{k:m:n}<Z_{m:m:n},D_1=i}(x)\\&\quad =\frac{\prod _{v=1}^k \gamma _v}{P(T<Z_{k:m:n}<Z_{m:m:n},D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{k-i} \Big (\frac{1}{\theta }\Big )^{-k}\\&\qquad \times \sum _{v=0}^{k-1} \frac{(-1)^v e^{-\frac{T}{\theta }\gamma _{k-v}}}{\{\prod _{j=1}^v (\gamma _{k-v}-\gamma _{k-v+j})\}\{\prod _{j=1}^{k-1-v} (\gamma _j-\gamma _{k-v})\}\gamma _{k-v}}\\&\qquad \times f_G\Big (x;\frac{T}{i}\gamma _{k-v},k,\frac{i}{\theta }\Big ). \end{aligned}$$

Theorem 5

The conditional moment generating function of \(\widehat{\theta }_1\) given \(Z_{k:m:n}<Z_{m:m:n}<T, D_1=i\) for \(i=1,\ldots ,m\) is given by,

$$\begin{aligned}&E(e^{t\widehat{\theta }_1}|Z_{k:m:n}<Z_{m:m:n}<T, D_1=i)\\&\quad =\frac{\prod _{v=1}^m \gamma _v}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{m-i} \Big (\frac{1}{\theta }-\frac{t}{i}\Big )^{-m}\\&\qquad \times \sum _{v=0}^{m} \frac{(-1)^v e^{-T(\frac{1}{\theta }-\frac{t}{i})(\gamma _{m-v+1}-\gamma _{m+1})}}{\{\prod _{j=1}^v (\gamma _{m-v+1}-\gamma _{m-v+j+1})\}\{\prod _{j=1}^{m-v} (\gamma _j-\gamma _{m-v+1})\}}. \end{aligned}$$

Proof

$$\begin{aligned}&E[e^{t\widehat{\theta }_1}|Z_{k:m:n}<Z_{m:m:n}<T,D_1=i]\\&\quad =\frac{\prod _{v=1}^m \gamma _v}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{m-i}\\&\qquad \times \int _{0}^T \int _{0}^{z_m}\ldots \int _{0}^{z_2} e^{-z_1(1+R_1)(\frac{1}{\theta }-\frac{t}{i})} \ldots e^{-z_{m-1}(1+R_{m-1})(\frac{1}{\theta }-\frac{t}{i})}e^{-z_m(1+R_m)(\frac{1}{\theta }-\frac{t}{i})} dz_1\ldots dz_m \end{aligned}$$

The above equality follows from Lemma 3; hence,

$$\begin{aligned}&\quad =\frac{\prod _{v=1}^m \gamma _v}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{m-i}\\&\qquad \times \int _{0}^T \ldots \int _{0}^{z_2} e^{-z_1} e^{-z_1(1+R_1)(\frac{1}{\theta }-\frac{t}{i})-1} \ldots e^{-z_m} e^{-z_m(1+R_m)(\frac{1}{\theta }-\frac{t}{i})-1} dz_1\ldots dz_m\\&\quad =\frac{\prod _{v=1}^m \gamma _v}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{m-i} \Big (\frac{1}{\theta }-\frac{t}{i}\Big )^{-m}\\&\qquad \times \sum _{v=0}^{m} \frac{(-1)^v e^{-T(\frac{1}{\theta }-\frac{t}{i})(\gamma _{m-v+1}-\gamma _{m+1})}}{\{\prod _{j=1}^v (\gamma _{m-v+1}-\gamma _{m-v+j+1})\}\{\prod _{j=1}^{m-v} (\gamma _j-\gamma _{m-v+1})\}} \end{aligned}$$

The last equality follows from Lemma 1 of Balakrishnan et al. (2002). \(\square \)

Corollary 3

The conditional distribution of \(\widehat{\theta }_1\) given \(Z_{k:m:n}<Z_{m:m:n}<T, D_1=i\) for \(i=1,\ldots , m\) is given by,

$$\begin{aligned}&f_{\widehat{\theta }_1|Z_{k:m:n}<Z_{m:m:n}<T,D_1=i}(x)\\&\quad =\frac{\prod _{v=1}^m \gamma _v}{P(Z_{k:m:n}<Z_{m:m:n}<T,D_1=i)} \bigg (\frac{1}{\theta _1}\bigg )^i\bigg (\frac{1}{\theta _2}\bigg )^{m-i} \Big (\frac{1}{\theta }\Big )^{-m}\\&\qquad \times \sum _{v=0}^{m} \frac{(-1)^v e^{-\frac{T}{\theta }(\gamma _{m-v+1}-\gamma _{m+1})}}{\{\prod _{j=1}^v (\gamma _{m-v+1}-\gamma _{m-v+j+1})\}\{\prod _{j=1}^{m-v} (\gamma _j-\gamma _{m-v+1})\}}\\&\qquad \times f_G\left( x;\frac{T}{i}(\gamma _{m-v+1}-\gamma _{m+1}),m,\frac{i}{\theta }\right) . \end{aligned}$$

Proof of Theorem 1

Combining Corollaries 1–3, we obtain the first part of Theorem 1. \(\square \)

Derivation of \(P(D_1=0)\).

$$\begin{aligned} P(D_1=0)=&P(D_1=0,Z_{k:m:n}<T<Z_{m:m:n})+P(D_1=0,T<Z_{k:m:n}<Z_{m:m:n})\\&+P(D_1=0,Z_{k:m:n}<Z_{m:m:n}<T)\\ =&P(Z_{k:m:n}<T<Z_{m:m:n})P(D_1=0|Z_{k:m:n}<T<Z_{m:m:n})\\&+P(T<Z_{k:m:n}<Z_{m:m:n})P(D_1=0|T<Z_{k:m:n}<Z_{m:m:n})\\&+P(Z_{k:m:n}<Z_{m:m:n}<T)P(D_1=0|Z_{k:m:n}<Z_{m:m:n}<T). \end{aligned}$$

We find each of the above probabilities separately.

$$\begin{aligned}&P(Z_{k:m:n}<T<Z_{m:m:n}) =\sum _{j=k}^{m-1} P(Z_{j:m:n}<T<Z_{j+1:m:n})\\&\quad =\sum _{j=k}^{m-1} \prod _{v=1}^{j+1} \gamma _v \Big (\frac{1}{\theta }\Big )^{j+1}\int _{T}^{\infty }\int _{0}^T\ldots \int _{0}^{z_2}e^{-\frac{1}{\theta }\sum _{i=1}^{j} z_i(1+R_i)} e^{-\frac{1}{\theta } z_{j+1}(1+R^*_{j+1})} dz_1 \ldots dz_j dz_{j+1}\\&\quad =\sum _{j=k}^{m-1} \prod _{v=1}^{j+1} \gamma _v \sum _{u=0}^{j} \frac{(-1)^u e^{-\frac{T}{\theta }\gamma _{j-u+1}}}{\{\prod _{v=1}^u (\gamma _{j-u+1}-\gamma _{j+v-u+1})\}\{\prod _{v=1}^{j-u} (\gamma _v-\gamma _{j-u+1})\}\{\gamma _{j-u+1}-u\}}. \end{aligned}$$
$$\begin{aligned} P(D_1=0|J=j)=\Big (\frac{\theta _1}{\theta _1+\theta _2}\Big )^j. \end{aligned}$$
$$\begin{aligned}&P(T<Z_{k:m:n}<Z_{m:m:n})\\&\quad =\prod _{v=1}^k \gamma _v \Big (\frac{1}{\theta }\Big )^k \int _T^{\infty } \int _{0}^{z_k}\ldots \int _0^{z_2} e^{-\frac{1}{\theta }\Big (\sum _{i=1}^{k-1} z_i(1+R_i)+z_k\gamma _k\Big )} dz_1\ldots dz_{k-1} dz_k\\&\quad =\prod _{v=1}^k \gamma _v \sum _{v=0}^{k-1} \frac{(-1)^v e^{-\frac{T}{\theta }\gamma _{k-v}}}{\{\prod _{j=1}^v (\gamma _{k-v}-\gamma _{k-v+j})\}\{\prod _{j=1}^{k-1-v} (\gamma _j-\gamma _{k-v})\}\gamma _{k-v}}.\\&P(D_1=0|T<Z_{k:m:n}<Z_{m:m:n}) =\Big (\frac{\theta _1}{\theta _1+\theta _2}\Big )^k\\&P(Z_{k:m:n}<Z_{m:m:n}<T)\\&\quad =\prod _{v=1}^m \gamma _v \Big (\frac{1}{\theta }\Big )^m \int _0^{T} \int _{0}^{z_m}\ldots \int _0^{z_2} e^{-\frac{1}{\theta }\sum _{i=1}^m z_i(1+R_i)}dz_1\ldots dz_{m-1} dz_m\\&\quad =\prod _{v=1}^m \gamma _v \sum _{v=0}^{m} \frac{(-1)^v e^{-\frac{T}{\theta }(\gamma _{m-v+1}-\gamma _{m+1})}}{\{\prod _{j=1}^v (\gamma _{m-v+1}-\gamma _{m-v+j+1})\}\{\prod _{j=1}^{m-v} (\gamma _j-\gamma _{m-v+1})\}}.\\&P(D_1=0|Z_{k:m:n}<Z_{m:m:n}<T)=\Big (\frac{\theta _1}{\theta _1+\theta _2}\Big )^m. \end{aligned}$$
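The conditional probabilities above reduce to powers of \(\theta _1/(\theta _1+\theta _2)\) because, under the exponential assumption, each observed failure is due to Cause 2 independently with probability \(\theta _1/(\theta _1+\theta _2)\). The following is a quick Monte Carlo sketch of this single ingredient (an illustration with arbitrary parameter values, not part of the derivation).

import numpy as np

rng = np.random.default_rng(0)
theta1, theta2, j = 2.0, 3.0, 4

# Latent exponential cause lifetimes with means theta1 and theta2; a failure is
# due to Cause 2 when X2 < X1, which occurs with probability theta1/(theta1+theta2).
x1 = rng.exponential(theta1, size=(200_000, j))
x2 = rng.exponential(theta2, size=(200_000, j))
all_cause2 = np.all(x2 < x1, axis=1)          # event {D_1 = 0} among j failures

print(all_cause2.mean())                      # Monte Carlo estimate
print((theta1 / (theta1 + theta2)) ** j)      # closed form (theta1/(theta1+theta2))^j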

Cite this article

Koley, A., Kundu, D. On generalized progressive hybrid censoring in presence of competing risks. Metrika 80, 401–426 (2017). https://doi.org/10.1007/s00184-017-0611-6
