
Point and interval estimation under progressive type-I interval censoring with random removal


Abstract

This work considers point and interval estimation based on data from a life test under progressive type-I interval censoring with random removal. The asymptotic properties of the maximum likelihood estimators (MLEs) are established under appropriate regularity conditions. Asymptotic confidence intervals and a \(\beta \)-content \(\gamma \)-level tolerance interval are obtained by using the asymptotic normality of the MLEs. A simulation study is undertaken to assess the performance of the MLEs, the confidence intervals and the tolerance interval. Lastly, the minimum sample size required to achieve a desired \(\beta \)-content \(\gamma \)-level tolerance interval is determined.


Acknowledgements

The authors would like to thank two anonymous reviewers for their helpful comments and suggestions. They are grateful to Professor Debasis Sengupta, Applied Statistics Unit, Indian Statistical Institute, Kolkata, for his valuable suggestions and comments.


Corresponding author

Correspondence to Sonal Budhiraja.

Appendix


Here we provide the proofs of the lemmas and results stated in the main article.

For the sake of convenience, we drop the index i in the proofs, as items are independent and identically distributed. Let \(\delta _{1,j}\) and \(\delta _{2,j}\) denote the indicator functions that an item fails in the jth interval and that it is censored at \(T_j\), respectively. Note that \(\sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})=1\) indicates that the item is at risk at \(T_j\); thus \(P\left( \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})=1\right) =\prod _{l=0}^{j}(1-p_l)(1-q_l)\), where \(p_0=q_0=0\). In all subsequent expressions we write \(F(T_j)\) instead of \(F(T_j;\varvec{\theta })\) for \(j=1,2,\ldots ,k\).
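As a concrete illustration of this data-generating mechanism, the following Python sketch (not taken from the paper; the exponential lifetime, the inspection times \(T_j\), the removal probabilities \(p_j\) and the parameter value are all assumed purely for illustration) generates the indicators \(\delta _{1,j}\), \(\delta _{2,j}\) for independent items and checks empirically that the probability of being at risk at \(T_j\) agrees with \(\prod _{l=0}^{j}(1-p_l)(1-q_l)\).

```python
# Minimal simulation sketch (illustrative, not from the paper): generating the
# indicators delta_{1,j}, delta_{2,j} of progressive type-I interval censoring
# with random removal, and checking empirically that
#   P(item at risk at T_j) = prod_{l<=j} (1 - p_l)(1 - q_l).
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5                                   # assumed exponential rate (illustrative)
T = np.array([1.0, 2.0, 3.0, 4.0])          # inspection times T_1 < ... < T_k (assumed)
p = np.array([0.1, 0.2, 0.1, 1.0])          # removal probabilities, with p_k = 1

def F(t):
    """Assumed lifetime cdf F(t; lam) = 1 - exp(-lam * t)."""
    return 1.0 - np.exp(-lam * t)

def simulate_item():
    """Return the exit interval (0-based) and whether the item failed or was censored."""
    x = rng.exponential(1.0 / lam)          # latent lifetime
    for j in range(len(T)):
        if x <= T[j]:                       # failure observed in (T_{j-1}, T_j]
            return j, "fail"                # corresponds to delta_{1,j+1} = 1
        if rng.random() < p[j]:             # survives T_j but is removed there
            return j, "censor"              # corresponds to delta_{2,j+1} = 1
    raise AssertionError("unreachable because p_k = 1")

n = 200_000
at_risk = np.zeros(len(T))                  # at_risk[j-1] counts items at risk at T_j
for _ in range(n):
    exit_j, _kind = simulate_item()
    at_risk[:exit_j] += 1                   # item is at risk at T_1, ..., T_{exit}

q = np.empty(len(T))                        # conditional failure probabilities q_j
q[0] = F(T[0])
q[1:] = (F(T[1:]) - F(T[:-1])) / (1.0 - F(T[:-1]))
theory = np.cumprod((1 - p) * (1 - q))      # prod_{l<=j} (1-p_l)(1-q_l)
print(np.round(at_risk / n, 4))
print(np.round(theory, 4))
```

The two printed vectors should agree up to Monte Carlo error, with the last entry equal to zero because \(p_k=1\).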

Proof of Lemma 1 (i)

By differentiating the contribution of an item to the log-likelihood function (6) with respect to the parameter \(\theta _u\) for \(u=1,2,\ldots ,m\), we have,

$$\begin{aligned} \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _u}= \sum _{j=1}^k \frac{1}{1-F(T_{j-1})} \left[ \frac{\delta _{1,{j}}}{q_j} \left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) -\frac{\delta _{2,{j}}}{1- q_j}\frac{\partial F(T_j)}{\partial \theta _u} \right] . \end{aligned}$$
(15)

By using (2) and (3), we have \(\displaystyle \mathrm {E}[\delta _{1,j}]=\prod \nolimits _{l=0}^{j-1} (1-p_l)(1-q_l)q_j\) and \(\displaystyle \mathrm {E}[\delta _{2,j}]=\prod \nolimits _{l=0}^{j-1} (1-p_l)(1-q_l)(1-q_j)p_j\), where \(p_0=q_0=0\). Then, taking the expectation of (15) term by term and using \(\prod _{l=0}^{j-1}(1-q_l)=1-F(T_{j-1})\) and \(p_k=1\), we get, for \(u=1,2,\ldots ,m\),

\(\displaystyle \mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _u}\right] =0.\)

Hence Lemma 1 (i) holds. \(\square \)
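The telescoping behind this computation can be made explicit (a worked restatement, using additionally that \(T_0=0\) implies \(F(T_0)=0\) and hence \(\partial F(T_0)/\partial \theta _u=0\)): substituting \(\mathrm {E}[\delta _{1,j}]\) and \(\mathrm {E}[\delta _{2,j}]\) into (15) and using \(\prod _{l=0}^{j-1}(1-q_l)=1-F(T_{j-1})\),

$$\begin{aligned} \mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _u}\right] =\sum _{j=1}^k \prod _{l=0}^{j-1}(1-p_l) \left[ (1-p_j)\frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right] =\prod _{l=0}^{k}(1-p_l)\frac{\partial F(T_k)}{\partial \theta _u}-\frac{\partial F(T_0)}{\partial \theta _u}=0, \end{aligned}$$

since the sum telescopes and \(p_k=1\).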

Proof of Lemma 1 (ii)

On differentiating the contribution of an item to the log-likelihood function (6) with respect to \(p_j\) for \(j=1,2,\ldots ,k-1\), we have

$$\begin{aligned} \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}= \frac{\delta _{2,j}}{p_j}-\frac{1}{1-p_j}\sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l}). \end{aligned}$$

Now \(\displaystyle \mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\right] =P\left( \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})=1\right) =\prod _{l=0}^{j}(1-p_l)(1-q_l)\), for \(j=1,\ldots ,k-1\). So,

$$\begin{aligned} \mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}\right] =\frac{\mathrm {E}[\delta _{2,j}]}{p_j}-\frac{1}{1-p_j}\, \mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\right] =\prod _{l=0}^{j-1}(1-p_l)(1-q_l)\left[ (1-q_j)-(1-q_j)\right] =0. \end{aligned}$$

Hence Lemma 1 (ii) holds. \(\square \)

Proof of Lemma 2 (i)

We have, for \(v=1,2,\ldots ,m\),

$$\begin{aligned} \frac{\partial ^2 {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial {\theta _u} \partial {\theta _v}}&= \sum _{j=1}^k \left[ \frac{\delta _{1,j}}{F(T_j)-F(T_{j-1})} \left( \frac{\partial ^2 F(T_j)}{\partial {\theta _u} \partial {\theta _v}}-\frac{\partial ^2 F(T_{j-1})}{\partial {\theta _u} \partial {\theta _v}} \right) -\frac{\delta _{2,j}}{1-F(T_{j})}\frac{\partial ^2 F(T_j)}{\partial {\theta _u} \partial {\theta _v}}\right] \nonumber \\&\quad -\sum _{j=1}^k \left[ \frac{\delta _{1,j}}{\left( F(T_j)-F(T_{j-1})\right) ^2} \left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) \left( \frac{\partial F(T_j)}{\partial \theta _v}-\frac{\partial F(T_{j-1})}{\partial \theta _v}\right) \right. \nonumber \\&\qquad \left. +\; \frac{\delta _{2,j}}{\left( 1-F(T_{j})\right) ^2} \frac{\partial F(T_j)}{\partial \theta _u}\frac{\partial F(T_j)}{\partial \theta _v}\right] , \end{aligned}$$
(16)

where \(T_0=0\). By using (2) and (3) and following arguments similar to the ones used for proving Lemma 1 (i), we have,

$$\begin{aligned} \sum _{j=1}^k \mathrm {E} \left[ \frac{\delta _{1,j}}{F(T_j)-F(T_{j-1})} \left( \frac{\partial ^2 F(T_j)}{\partial {\theta _u} \partial {\theta _v}}-\frac{\partial ^2 F(T_{j-1})}{\partial {\theta _u} \partial {\theta _v}} \right) -\frac{\delta _{2,j}}{1-F(T_{j})}\frac{\partial ^2 F(T_j)}{\partial {\theta _u} \partial {\theta _v}}\right] =0. \end{aligned}$$

Thus for \(u,v=1,2,\ldots ,m\), we have

$$\begin{aligned} \displaystyle \mathrm {E}&\left[ -\frac{\partial ^ 2 {\mathcal {l}}^{I^*} (\varvec{\theta }) }{\partial {\theta _u} \partial {\theta _v}}\right] \nonumber \\&\quad = \sum _{j=1}^k \frac{\mathrm {E}[\delta _{1,j}]}{\left( F(T_j)-F(T_{j-1})\right) ^2} \left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) \left( \frac{\partial F(T_j)}{\partial \theta _v}-\frac{\partial F(T_{j-1})}{\partial \theta _v}\right) \nonumber \\&\qquad +\sum _{j=1}^k \frac{\mathrm {E}[\delta _{2,j}]}{\left( 1-F(T_{j})\right) ^2} \frac{\partial F(T_j)}{\partial \theta _u}\frac{\partial F(T_j)}{\partial \theta _v} \nonumber \\&\quad = \sum _{j=1}^k \frac{\prod _{l=0}^{j-1}(1-p_l)(1-q_l)}{\left( 1-F(T_{j-1})\right) ^2} \left[ \frac{1}{q_j} \left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) \left( \frac{\partial F(T_j)}{\partial \theta _v}-\frac{\partial F(T_{j-1})}{\partial \theta _v}\right) \right. \nonumber \\&\qquad \left. +\; \frac{p_j}{1-q_j}\frac{\partial F(T_j)}{\partial \theta _u}\frac{\partial F(T_j)}{\partial \theta _v} \right] . \end{aligned}$$
(17)

Next by using (15), we have

$$\begin{aligned}&\displaystyle \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _u} \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _v}\\&\quad = \left[ \sum _{j=1}^k \frac{1}{1-F(T_{j-1})} \left[ \frac{\delta _{1,{j}}}{q_j} \left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) -\frac{\delta _{2,{j}}}{1- q_j}\frac{\partial F(T_j)}{\partial \theta _u} \right] \right] \\&\qquad \times \left[ \sum _{s=1}^k \frac{1}{1-F(T_{s-1})} \left[ \frac{\delta _{1,{s}}}{q_s} \left( \frac{\partial F(T_s)}{\partial \theta _v}-\frac{\partial F(T_{s-1})}{\partial \theta _v}\right) -\frac{\delta _{2,{s}}}{1- q_s}\frac{\partial F(T_s)}{\partial \theta _v} \right] \right] \end{aligned}$$

Since, out of the 2k indicator variables \(\delta _{1,1},\delta _{1,2},\ldots , \delta _{1,k}, \delta _{2,1}, \delta _{2,2},\ldots ,\delta _{2,k}\), exactly one takes the value 1 and the others are zero, we observe that when \(j\ne s\), \(\displaystyle P\left( \delta _{1,j}\delta _{1,s}=1\right) = P\left( \delta _{2,j}\delta _{2,s}=1\right) =0.\) Also \(P\left( \delta _{1,j}\delta _{2,s}=1\right) =0\) for all \(j\) and \(s\). As a result, we have, for \(j\ne s\),

$$\begin{aligned} \mathrm {E}\left[ \delta _{1,j}\delta _{1,s}\right] =\mathrm {E}\left[ \delta _{2,j} \delta _{2,s}\right] =0 \end{aligned}$$
(18)

and when \(j=s\), we get

$$\begin{aligned} \mathrm {E}\left[ \delta _{1,j}^2\right] =P(\delta _{1,j}=1), \mathrm {E}\left[ \delta _{2,j}^2\right] =P(\delta _{2,j}=1) \text{ and } \mathrm {E}\left[ \delta _{1,j}\delta _{2,j}\right] =0. \end{aligned}$$
(19)

Thus by using (18) and (19), we get

$$\begin{aligned}&\displaystyle \mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _u} \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _v}\right] \nonumber \\&\quad = \sum _{j=1}^k \frac{1}{\left( 1-F(T_{j-1})\right) ^2} \left[ \frac{\mathrm {E}[\delta _{1,j}^2]}{q_j^2}\left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) \left( \frac{\partial F(T_j)}{\partial \theta _v}-\frac{\partial F(T_{j-1})}{\partial \theta _v}\right) \right. \nonumber \\&\qquad \left. - \;\frac{\mathrm {E}[\delta _{1,j}\delta _{2,j} ]}{q_j(1-q_j)} \left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) \frac{\partial F(T_j)}{\partial \theta _v}\right. \nonumber \\&\qquad \left. - \;\frac{\mathrm {E}[\delta _{1,j}\delta _{2,j} ]}{q_j(1-q_j)} \left( \frac{\partial F(T_j)}{\partial \theta _v}-\frac{\partial F(T_{j-1})}{\partial \theta _v}\right) \frac{\partial F(T_j)}{\partial \theta _u}+ \frac{\mathrm {E}[\delta _{2,j}^2]}{(1-q_j)^2}\frac{\partial F(T_j)}{\partial \theta _u} \frac{\partial F(T_j)}{\partial \theta _v}\right] . \nonumber \\&\quad = \sum _{j=1}^k \frac{\prod _{l=0}^{j-1}(1-p_l)(1-q_l)}{\left( 1-F(T_{j-1})\right) ^2} \left[ \frac{1}{q_j}\left( \frac{\partial F(T_j)}{\partial \theta _u}-\frac{\partial F(T_{j-1})}{\partial \theta _u}\right) \left( \frac{\partial F(T_j)}{\partial \theta _v}-\frac{\partial F(T_{j-1})}{\partial \theta _v}\right) \right. \nonumber \\&\qquad \left. + \;\frac{p_j}{(1-q_j)}\frac{\partial F(T_j)}{\partial \theta _u} \frac{\partial F(T_j)}{\partial \theta _v}\right] . \end{aligned}$$
(20)

Thus, using Eqs. (17) and (20), part (i) of Lemma 2 is proved.
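As a numerical illustration of this identity, the following Monte Carlo sketch (an assumed example, not part of the paper: a one-parameter exponential lifetime \(F(t;\lambda )=1-e^{-\lambda t}\) with illustrative \(T_j\), \(p_j\) and \(\lambda \)) compares the sample averages of \(\left( \partial {\mathcal {l}}^{I^*}/\partial \lambda \right) ^2\) and \(-\partial ^2 {\mathcal {l}}^{I^*}/\partial \lambda ^2\) over simulated items; the two averages should agree up to Monte Carlo error.

```python
# Monte Carlo sketch (assumed exponential example, not from the paper) checking
# E[(d l / d lam)^2] = E[- d^2 l / d lam^2] under progressive type-I interval
# censoring with random removal.  T, p and lam are illustrative values.
import numpy as np

rng = np.random.default_rng(1)
lam = 0.5
T = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.1, 0.2, 0.1, 1.0])          # removal probabilities, p_k = 1

F   = lambda t: 1.0 - np.exp(-lam * t)      # lifetime cdf
dF  = lambda t: t * np.exp(-lam * t)        # dF/dlam
d2F = lambda t: -t * t * np.exp(-lam * t)   # d^2F/dlam^2

def simulate_item():
    x = rng.exponential(1.0 / lam)
    for j in range(len(T)):
        if x <= T[j]:
            return j, True                  # observed failure in (T_{j-1}, T_j]
        if rng.random() < p[j]:
            return j, False                 # censored at T_j
    raise AssertionError("unreachable because p_k = 1")

n, score_sq, neg_hess = 100_000, 0.0, 0.0
for _ in range(n):
    j, failed = simulate_item()
    tl, tr = (0.0 if j == 0 else T[j - 1]), T[j]
    if failed:                              # contribution of log(F(T_j) - F(T_{j-1}))
        u = (dF(tr) - dF(tl)) / (F(tr) - F(tl))
        h = (d2F(tr) - d2F(tl)) / (F(tr) - F(tl)) - u ** 2
    else:                                   # contribution of log(1 - F(T_j))
        u = -dF(tr) / (1.0 - F(tr))
        h = -d2F(tr) / (1.0 - F(tr)) - u ** 2
    score_sq += u ** 2
    neg_hess += -h
print(score_sq / n, neg_hess / n)           # the two averages should nearly agree
```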

Proof of Lemma 2 (ii)

On differentiating (15) with respect to \(p_s\) for \(s=1,2,\ldots ,k-1\), we have \(\frac{\partial ^2 {\mathcal {l}}^{I^*} (\varvec{\theta }) }{\partial {\theta _u}\, \partial {p_s}}=0\), since (15) does not involve \(p_s\). Hence for \(u=1,2,\ldots ,m\) and \(s=1,2,\ldots ,k-1\), we get that

$$\begin{aligned} \mathrm {E}\left[ -\frac{\partial ^ 2 {\mathcal {l}}^{I^*} (\varvec{\theta }) }{\partial {\theta _u}\, \partial {p_s}}\right] =0. \end{aligned}$$
(21)

Next we consider the LHS of part (ii) of Lemma 2. For \(s=1,2,\ldots ,k-1\), we have

$$\begin{aligned} \mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial \theta _u}\, \frac{\partial {\mathcal {l}}^{I^*} (\varvec{\theta })}{\partial p_s}\right] =0. \end{aligned}$$
(22)

Hence by using (21) and (22), we prove that Lemma 2 (ii) holds.

Proof of Lemma 2 (iii)

From the expression of \(\frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}\) obtained in the proof of Lemma 1 (ii), it is obvious that

\(\displaystyle \frac{\partial ^2 {\mathcal {l}}^{I^*}}{\partial p_j^2}=-\frac{\delta _{2,j}}{p_j^2}-\frac{1}{(1-p_j)^2}\sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\)    and    \(\displaystyle \frac{\partial ^2 {\mathcal {l}}^{I^*}}{\partial p_j\, \partial p_s}=0\),    for \(j \ne s\).

Next we consider the LHS of part (iii) of Lemma 2, namely \(\mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}\, \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_s}\right] \).

By using the same expression for \(\frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}\), we have

$$\begin{aligned} \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}\, \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_s} =\frac{\delta _{2,j}\delta _{2,s}}{p_j p_s} -\frac{\delta _{2,j}\sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})}{p_j(1-p_s)} -\frac{\delta _{2,s}\sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})}{p_s(1-p_j)} +\frac{\sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})}{(1-p_j)(1-p_s)}. \end{aligned}$$

Note that by using (18) and (19), we have, \(\mathrm {E}\left[ \delta _{1,j} \delta _{2,s} \right] =0\) for all j and s, and \(\mathrm {E}\left[ \delta _{2,j} \delta _{2,s} \right] =\mathrm {E}\left[ \delta _{2,j}\right] \) for \(j=s\) and 0 otherwise. Also we have

$$\begin{aligned} \begin{aligned} \mathrm {E}\left[ \delta _{1,j} \sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right]&=\left\{ \begin{array}{cc} \mathrm {E}\left[ \delta _{1,j}\right] &{} \text{ when } j>s \\ 0&{} \text{ otherwise }. \end{array} \right. \\ \mathrm {E}\left[ \delta _{2,j} \sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right]&=\left\{ \begin{array}{cc} \mathrm {E}\left[ \delta _{2,j}\right] &{} \text{ when } j>s\\ 0&{} \text{ otherwise }. \end{array} \right. \\ \end{aligned} \end{aligned}$$
(23)

We also note that an item at risk at \(T_j\) must also be at risk at \(T_s\) for \(s \le j\). Thus it is easy to deduce that

$$\begin{aligned} \mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l}) \sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right] =\left\{ \begin{array}{ll} \mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\right] &{} \text{ when } j>s\\ \mathrm {E}\left[ \sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right] &{} \text{ when } s>j \\ \mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\right] &{} \text{ when } j=s \end{array} \right. \end{aligned}$$
(24)

and

By using (23) and (24), for \(j>s\), we have

$$\begin{aligned} -\frac{\mathrm {E}\left[ \delta _{2,j}\sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right] }{p_j(1-p_s)}+ \frac{\mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right] }{(1-p_j)(1-p_s)} =\frac{\prod _{l=0}^{j-1}(1-p_l)(1-q_l)}{1-p_s} \left[ -\frac{(1-q_j)p_j}{p_j}+\frac{(1-q_j)(1-p_j)}{1-p_j}\right] =0. \end{aligned}$$

Similarly for \(j<s\),

$$\begin{aligned} \displaystyle -\frac{\mathrm {E}\left[ \delta _{2,s}\sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\right] }{p_s(1-p_j)}+ \frac{\mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\sum _{l=s+1}^k (\delta _{1,l}+\delta _{2,l})\right] }{(1-p_j)(1-p_s)}=0. \end{aligned}$$

Lastly, for \(j=s\)

$$\begin{aligned} \frac{\mathrm {E}\left[ \delta _{2,j}^2\right] }{p_j^2}+\frac{\mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})^2\right] }{(1-p_j)^2}= & {} \frac{\mathrm {E}\left[ \delta _{2,j}\right] }{p_j^2}+\frac{\mathrm {E}\left[ \sum _{l=j+1}^k (\delta _{1,l}+\delta _{2,l})\right] }{(1-p_j)^2}\\= & {} \prod _{l=0}^{j-1}(1-p_l)(1-q_l)\frac{(1-q_j)}{p_j(1-p_j)}. \end{aligned}$$

Hence we have, for \(j,s=1,2,\ldots ,k-1\),

$$\begin{aligned} \mathrm {E}\left[ \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_j}\, \frac{\partial {\mathcal {l}}^{I^*}}{\partial p_s}\right] =\mathrm {E}\left[ -\frac{\partial ^2 {\mathcal {l}}^{I^*}}{\partial p_j\, \partial p_s}\right] =\left\{ \begin{array}{ll} \prod _{l=0}^{j-1}(1-p_l)(1-q_l)\, \dfrac{1-q_j}{p_j(1-p_j)} &{} \text{ when } j=s,\\ 0 &{} \text{ when } j\ne s. \end{array} \right. \end{aligned}$$

Hence the proof of Lemma 2 is complete. \(\square \)

Proof of Result 1

Let \(\nabla _{\varvec{\theta }}F(T_j)=\left( \frac{\partial F(T_j)}{\partial \theta _1}, \frac{\partial F(T_j)}{\partial \theta _2}, \ldots , \frac{\partial F(T_j)}{\partial \theta _m}\right) ^T\) for \(j=1,2,\ldots ,k\). By using Lemma 2 and the assumption that items are independent and identically distributed, we have

where \(\mathrm {E}[N_j]=n \prod _{l=0}^{j-1}(1-p_l)(1-q_l)\) and \(p_0=q_0=0\).

Similarly for \(j,s=1,2,\ldots ,k-1\), we have

where ,    for \(j=1,2,\ldots ,k-1\).

Hence Result 1 is proved.
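Result 1 can be evaluated numerically once a lifetime model is fixed. The sketch below is an assumed exponential example, not taken from the paper: it computes the \(\varvec{\theta }\)-block of the expected Fisher information implied by (20) summed over the \(n\) items, i.e. \(\sum _{j}\mathrm {E}[N_j]\) times the per-item summand, together with the asymptotic 95% confidence-interval half-width \(1.96/\sqrt{I(\lambda )}\); the inspection times, removal probabilities, \(\lambda \) and \(n\) are illustrative.

```python
# Sketch (assumed exponential example): theta-block of the expected Fisher
# information implied by Eq. (20) summed over n items,
#   I(lam) = sum_j E[N_j] / (1 - F(T_{j-1}))^2
#            * [ (dF_j - dF_{j-1})^2 / q_j + p_j * dF_j^2 / (1 - q_j) ],
# with E[N_j] = n * prod_{l<j} (1 - p_l)(1 - q_l).  All inputs are illustrative.
import numpy as np

lam, n = 0.5, 100
T = np.array([1.0, 2.0, 3.0, 4.0])          # inspection times
p = np.array([0.1, 0.2, 0.1, 1.0])          # removal probabilities, p_k = 1

F  = lambda t: 1.0 - np.exp(-lam * t)
dF = lambda t: t * np.exp(-lam * t)         # dF/dlam

T_prev = np.concatenate(([0.0], T[:-1]))    # T_0 = 0
q = (F(T) - F(T_prev)) / (1.0 - F(T_prev))  # conditional failure probabilities q_j
EN = n * np.concatenate(([1.0], np.cumprod((1 - p[:-1]) * (1 - q[:-1]))))  # E[N_j]

summand = (EN / (1.0 - F(T_prev)) ** 2) * (
    (dF(T) - dF(T_prev)) ** 2 / q + p * dF(T) ** 2 / (1.0 - q)
)
I_lam = summand.sum()
print(I_lam)                                # expected information for lam
print(1.96 / np.sqrt(I_lam))                # asymptotic 95% CI half-width for lam
```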

Next we prove Lemmas 3 to 5, which are needed to establish the consistency and asymptotic properties of the MLE of \(\varvec{\theta }\).

Proof of Lemma 3

By (1), we have \(\mathrm {E}[N_j]=n\prod _{l=0}^{j-1}(1-p_l)(1-q_l)\). Thus, \( \mathrm {E}\left[ \frac{N_j}{n}\right] =\prod _{l=0}^{j-1}(1-p_l)(1-q_l) =b_j\), say. It is clear that \(b_j < \infty \). Now, it is enough to show that the variance of \(N_j/n\) tends to zero as \(n \rightarrow \infty \). We have

$$\begin{aligned} \text{ Var }\left( \frac{N_j}{n}\right) =\mathrm {E}\left[ \left( \frac{N_j}{n}\right) ^2\right] -\mathrm {E}^2\left[ \frac{N_j}{n}\right]= & {} \frac{1}{n} \sum _{l=0}^{j-1} \prod _{l'=1}^{l} (1-p_{l'})(1-q_{l'}) (1-p_l)q_l. \end{aligned}$$

Thus \( \text{ Var }\left( \frac{N_j}{n}\right) \rightarrow 0\) as \(n \rightarrow \infty \), which completes the proof. \(\square \)
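Indeed, combined with Chebyshev's inequality, this yields the convergence explicitly: for every \(\varepsilon >0\),

$$\begin{aligned} P\left( \left| \frac{N_j}{n}-b_j\right| \ge \varepsilon \right) \le \frac{\text{ Var }\left( N_j/n\right) }{\varepsilon ^2} \rightarrow 0 \quad \text{ as } n \rightarrow \infty , \end{aligned}$$

so that \(N_j/n\) converges in probability to \(b_j\).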

Proof of Lemma 4 (i)

Consider the \((u,v)\)th element of the Fisher information matrix obtained in Result 1.

For \(u=1,2,\ldots ,m\), we define

$$\begin{aligned} \zeta _1= & {} \max _{1\le j\le k} \sup _{\varvec{\theta } \in \Theta } \frac{1}{F(T_j)-F(T_{j-1})} . \end{aligned}$$
(25)
$$\begin{aligned} \zeta _2= & {} \max _{1\le j\le k} \sup _{\varvec{\theta } \in \Theta } \frac{1}{1-F(T_j)} . \end{aligned}$$
(26)
$$\begin{aligned} A_u= & {} \max _{1\le j\le k} \sup _{\varvec{\theta } \in \Theta } \left| \frac{\partial F(T_j)}{\partial \theta _u}\right| . \end{aligned}$$
(27)

Then

Also, for fixed \(n\), \(\mathrm {E} [N_j] <n\). Lastly, for all \( \varvec{\theta } \in \Theta \), by regularity condition V(a), we have \(0<\frac{1}{q_j(1-q_j)}<\infty \), and the number of inspections \(k\) is finite. Hence each element of the Fisher information matrix is finite for all \(\varvec{\theta } \in \Theta \).

Proof of Lemma 4 (ii)

Note that the Fisher information matrix is the variance-covariance matrix of the score vector; hence it is symmetric and non-negative definite. Consider a vector \({\mathbf {a}}(\varvec{\theta })\ne 0\). Then we have

Note that \(0<q_j<1\) and \(\mathrm {E} [N_j]>0\) for \(j=1,2,\ldots ,k\). Also, by the regularity condition V(b), \({\mathbf {a}}(\varvec{\theta })^T \nabla _{\varvec{\theta }}\left( F(T_j)-F(T_{j-1})\right) \ne 0\) and \({\mathbf {a}}(\varvec{\theta })^T\nabla _{\varvec{\theta }}F(T_j) \ne 0\). This implies that the quadratic form is strictly positive for every \({\mathbf {a}}(\varvec{\theta })\ne 0\). Hence the Fisher information matrix is positive definite for all \(\varvec{\theta } \in \Theta _0\).
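In terms of the quantities appearing in the proof above (writing, for this remark only, \(I(\varvec{\theta })\) for the matrix under consideration; the explicit form below is the one implied by Lemma 2 (i) and Result 1 rather than reproduced from the main text), the quadratic form is a weighted sum of squares,

$$\begin{aligned} {\mathbf {a}}(\varvec{\theta })^T I(\varvec{\theta })\, {\mathbf {a}}(\varvec{\theta }) =\sum _{j=1}^k \frac{\mathrm {E}[N_j]}{\left( 1-F(T_{j-1})\right) ^2} \left[ \frac{1}{q_j}\left( {\mathbf {a}}(\varvec{\theta })^T \nabla _{\varvec{\theta }}\left( F(T_j)-F(T_{j-1})\right) \right) ^2 +\frac{p_j}{1-q_j}\left( {\mathbf {a}}(\varvec{\theta })^T \nabla _{\varvec{\theta }}F(T_j)\right) ^2\right] , \end{aligned}$$

in which every summand is non-negative and, under regularity condition V(b), strictly positive.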

Proof of Lemma 5

For \(u,v,w=1,2,\ldots ,m\), we define

$$\begin{aligned} A_{uv}= & {} \max _{1\le j\le k} \sup _{\varvec{\theta } \in \Theta } \left| \frac{\partial ^2 F(T_j)}{\partial \theta _u \partial \theta _v} \right| . \end{aligned}$$
(28)
$$\begin{aligned} A_{uvw}= & {} \max _{1\le j\le k} \sup _{\varvec{\theta } \in \Theta } \left| \frac{\partial ^3 F(T_j)}{\partial \theta _u \partial \theta _v \partial \theta _w}\right| . \end{aligned}$$
(29)

By using Eqs. (25)–(27), (28) and (29), it can be easily shown that

Now

where

Proof of Lemma 6 (i)

We show that the \((j,j')\)th element of the information matrix corresponding to \(\varvec{p}\) is finite for \(j,j'=1,2,\ldots ,k-1\). Note that this element is zero for \(j\ne j'\) and is finite when \(j'=j\). Hence each element of the Fisher information matrix is finite for all \(\left( \varvec{\theta }, \varvec{p}\right) \in \Theta ^*\).

Proof of Lemma 6 (ii)

We prove that the information matrix corresponding to \(\varvec{p}\) is positive definite. A diagonal matrix is positive definite if and only if all its diagonal elements are positive. The \(j\)th diagonal element of this matrix is positive for \(j=1,2,\ldots ,k-1\). Hence it is positive definite.
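Concretely, combining Lemma 2 (iii) with \(\mathrm {E}[N_j]=n\prod _{l=0}^{j-1}(1-p_l)(1-q_l)\) (the explicit expression below is inferred from those results rather than reproduced from the main text), the \(j\)th diagonal element takes the form

$$\begin{aligned} \mathrm {E}[N_j]\, \frac{1-q_j}{p_j(1-p_j)}, \quad j=1,2,\ldots ,k-1, \end{aligned}$$

which is positive since \(0<p_j<1\), \(0<q_j<1\) and \(\mathrm {E}[N_j]>0\).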

Proof of Lemma 7

Given that the regularity condition (VI) holds, we define

$$\begin{aligned} p_{1_j}^*= & {} \sup _{p_j \in \Theta _{p_j} } \frac{1}{p_j} \quad \text{ and } \quad p_{2_j}^* =\sup _{p_j \in \Theta _{p_j}}\frac{1}{1-p_j}. \end{aligned}$$
(30)

By using Eq. (30), we have

Now

where

Hence the proof of the lemma is complete. \(\square \)

Cite this article

Budhiraja, S., Pradhan, B. Point and interval estimation under progressive type-I interval censoring with random removal. Stat Papers 61, 445–477 (2020). https://doi.org/10.1007/s00362-017-0948-y
