Estimating the survival function based on the semi-Markov model for dependent censoring

Published in Lifetime Data Analysis.

Abstract

In this paper, we study a nonparametric maximum likelihood estimator (NPMLE) of the survival function based on a semi-Markov model under dependent censoring. We show that the NPMLE is asymptotically normal and achieves asymptotic nonparametric efficiency. We also provide a uniformly consistent estimator of the corresponding asymptotic covariance function based on an information operator. The finite-sample performance of the proposed NPMLE is examined with simulation studies, which show that the NPMLE has smaller mean squared error than the existing estimators and its corresponding pointwise confidence intervals have reasonable coverages. A real example is also presented.
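The three-state semi-Markov structure underlying the model (the transitions \(\left( {0},{1}\right) \), \(\left( {0},{2}\right) \), and \(\left( {1},{2}\right) \) used in the appendix) can be illustrated with a small simulation. The exponential sojourn distributions, the rates, and the function names below are illustrative assumptions, not the paper's simulation design; the defining semi-Markov feature is that the clock restarts after the \(0\rightarrow 1\) transition.

```python
import numpy as np

def simulate_path(rng, lam01=0.5, lam02=0.3, lam12=0.7):
    """One path of a three-state semi-Markov model with transitions
    (0,1), (0,2), (1,2).  The sojourn clock restarts after the 0 -> 1
    transition (the semi-Markov property).  Rates are illustrative."""
    t01 = rng.exponential(1.0 / lam01)          # latent 0 -> 1 time
    t02 = rng.exponential(1.0 / lam02)          # latent 0 -> 2 time
    if t02 <= t01:
        return t02, [(0, 2, t02)]               # direct 0 -> 2 transition
    t12 = rng.exponential(1.0 / lam12)          # sojourn in state 1
    return t01 + t12, [(0, 1, t01), (1, 2, t01 + t12)]

rng = np.random.default_rng(0)
total_times = np.array([simulate_path(rng)[0] for _ in range(100_000)])
print(total_times.mean())
```

Under these rates the expected terminal time is \(1/(\lambda_{01}+\lambda_{02})+\lambda_{01}/(\lambda_{01}+\lambda_{02})\cdot 1/\lambda_{12}\), which the empirical mean should approximate.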


References

  • Andersen PK, Borgan O, Gill RD, Keiding N (1993) Statistical models based on counting processes. Springer, New York

  • Datta S, Satten GA, Datta S (2000) Nonparametric estimation for the three-stage irreversible illness-death model. Biometrics 56:841–847

  • Kiefer J, Wolfowitz J (1956) Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. Ann Math Stat 27(4):887–906

  • Lagakos SW, Williams JS (1978) Models for censored survival analysis: a cone class of variable-sum models. Biometrika 65(1):181–189

  • Lee SY, Tsai WY (2005) An estimator of the survival function based on the semi-Markov model under dependent censorship. Lifetime Data Anal 11:193–211

  • Lee SY, Wolfe RA (1998) A simple test for independent censoring under the proportional hazards model. Biometrics 54:1176–1182

  • Scholz FW (1980) Towards a unified definition of maximum likelihood. Can J Stat 8:193–203

  • Tsiatis AA (1975) A nonidentifiability aspect of the problem of competing risks. Proc Natl Acad Sci USA 72:20–22

  • Van der Vaart AW, Wellner JA (1996) Weak convergence and empirical processes: with applications to statistics. Springer, New York

Acknowledgments

The research is supported by the National Natural Science Foundation of China (11271081) and the Student Growth Fund Scholarship of School of Management, Fudan University.

Author information

Corresponding author

Correspondence to Zhezhen Jin.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 261 KB)

Appendix

1.1 Preliminary

Lemmas 1 and 2 establish the Donsker properties needed in the proofs below.

Lemma 1

The classes \(\left\{ \left( {x},\delta \right) \!\mapsto \!\delta \mathrm {I}\left\{ {x}\!\leqslant {t}\!\right\} :~{t}\geqslant {0}\right\} \) and \(\left\{ \left( {x},\delta \right) \!\mapsto \!\delta \mathrm {I}\left\{ {x}\!\geqslant {t}\!\right\} :~{t}\geqslant {0}\right\} \) are Donsker classes of functions on \(\mathbb {R}_{+}\times \left\{ {0},{1}\right\} \).

Proof

Combining Lemma 2.6.15 and Lemma 2.6.18(iii, vi) of Vaart and Wellner (1996), it can be shown that \(\left\{ \left( {x},\delta \right) \mapsto \delta \mathrm {I}\left\{ {x}\leqslant {t}\right\} :~{t}\geqslant {0}\right\} \) and \(\left\{ \left( {x},\delta \right) \mapsto \delta \mathrm {I}\left\{ {x}\geqslant {t}\right\} :{t}\geqslant {0}\right\} \) are VC-classes on \(\mathbb {R}_{+}\times \left\{ {0},{1}\right\} \). By Theorem 2.6.8 of Vaart and Wellner (1996), they are Donsker classes; the measurability conditions can be verified via the denseness of the rational numbers.\(\square \)

Lemma 2

Let \(\eta \) be a positive real number and \(\varLambda \) be a nondecreasing cadlag function on \(\left[ {0},\eta \right] \). The class \(\left\{ \left( {x},\delta \right) \mapsto \int _{0}^{t}\delta \mathrm {I}\left\{ {x}\geqslant {y}\right\} ~\varLambda \left( d{y}\right) :~{t}\in \left[ {0},\eta \right] \right\} \) is a Donsker class of functions on \(\mathbb {R}_{+}\times \left\{ {0},{1}\right\} \).

Proof

The result follows by combining Example 2.6.21 and Example 2.10.8 of Vaart and Wellner (1996). \(\square \)

Lemma 3

Let \(\eta \) be a positive real number and \(\mathcal {BV}\left( \left[ {0},\eta \right] \right) \) be the space of all cadlag functions defined on \(\left[ {0},\eta \right] \) whose total variations are bounded by \({2}\). For any \((\mathsf {F},\mathsf {G},\mathsf {H})\in \mathcal {BV}\left( \left[ {0},\eta \right] \right) \times \mathcal {BV}\left( \left[ {0},\eta \right] \right) \times \mathcal {BV}\left( \left[ {0},\eta \right] \right) \) and any \({t}\in \left[ {0},\eta \right] \), let

$$\begin{aligned} \phi \left( \mathsf {F},\mathsf {G},\mathsf {H}\right) \left[ {t}\right] =\mathsf {F}\left( {t}\right) \mathsf {G}\left( {t}\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}\left( {u}\right) \mathsf {H}\left( {t}-{u}\right) \;\mathsf {F}\left( d{u}\right) . \end{aligned}$$

For any \(\mathsf {F}_{0},\mathsf {G}_{0},\mathsf {H}_{0}\in \mathcal {BV}\left( \left[ {0},\eta \right] \right) \), \(\phi \) is Hadamard differentiable at \((\mathsf {F}_{0},\mathsf {G}_{0},\mathsf {H}_{0})\) with derivative \(\phi ^{\prime }(\mathsf {F}_{0},\mathsf {G}_{0},\mathsf {H}_{0})\), where for any \({t}\in \left[ {0},\eta \right] \),

$$\begin{aligned} \phi ^{\prime }\left( \beta _{0}\right) \left[ \beta \right] \left[ {t}\right]= & {} \mathsf {F}_{0}\left( {t}\right) \mathsf {G}\left( {t}\right) +\mathsf {F}\left( {t}\right) \mathsf {G}_{0}\left( {t}\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}\left( dx\right) \\&-\,\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) , \end{aligned}$$

where \(\beta =(\mathsf {F},\mathsf {G},\mathsf {H})\) and \(\beta _{0}=(\mathsf {F}_{0},\mathsf {G}_{0},\mathsf {H}_{0})\).

Proof

Let \(\mathsf {F}_{0},\mathsf {G}_{0},\mathsf {H}_{0},\mathsf {F},\mathsf {G},\mathsf {H}\in \mathcal {BV}\left( \left[ {0},\eta \right] \right) \), let \(\{(\mathsf {F}_{m},\mathsf {G}_{m},\mathsf {H}_{m}):\;{m}\in \mathbb {N}\}\) be a sequence converging to \((\mathsf {F},\mathsf {G},\mathsf {H})\) in \(\mathcal {BV}\left( \left[ {0},\eta \right] \right) \times \mathcal {BV}\left( \left[ {0},\eta \right] \right) \times \mathcal {BV}\left( \left[ {0},\eta \right] \right) \), and let \(\{{h}_{m}:\;{m}\in \mathbb {N}\}\) be a sequence of real numbers converging to \(0\).

Denote \(\beta _{0}=(\mathsf {F}_{0},\mathsf {G}_{0},\mathsf {H}_{0})\) and \(\beta _{m}=\beta _{0}+{h}_{m}(\mathsf {F},\mathsf {G},\mathsf {H})\).

Note that, for any \({t}\in \left[ {0},\eta \right] \), \(\phi (\beta _{m})[t]=\phi (\beta _{0})[t]+{h}_{m}\varGamma _{m}({t})+{h}_{m}^{2}\mathsf {W}_{m}({t})\), where

$$\begin{aligned} \varGamma _{m}\left( {t}\right)= & {} \mathsf {F}_{0}\left( {t}\right) \mathsf {G}_{m}\left( {t}\right) +\mathsf {F}_{m}\left( {t}\right) \mathsf {G}_{0}\left( {t}\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}_{m}\left( dx\right) \\&-\,\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}_{m}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}_{m}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) ,\\ \mathsf {W}_{m}\left( {t}\right)= & {} \mathsf {F}_{m}\left( {t}\right) \mathsf {G}_{m}\left( {t}\right) -\int _{\left[ {0},{t}\right] }\mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {G}_{m}\left( {x}\right) \mathsf {F}_{m}\left( dx\right) \\&-\,\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}_{m}\left( {t}-{x}\right) \mathsf {F}_{m}\left( dx\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}_{m}\left( {x}\right) \mathsf {H}_{m}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) \\&-\,{h}_{m}\int _{\left[ {0},{t}\right] }\mathsf {G}_{m}\left( {x}\right) \mathsf {H}_{m}\left( {t}-{x}\right) \mathsf {F}_{m}\left( dx\right) . \end{aligned}$$

For any \({t}\in \left[ {0},\eta \right] \), let

$$\begin{aligned} \varGamma _{0}\left( {t}\right)= & {} \mathsf {F}_{0}\left( {t}\right) \mathsf {G}\left( {t}\right) +\mathsf {F}\left( {t}\right) \mathsf {G}_{0}\left( {t}\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}\left( dx\right) \\&-\,\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) . \end{aligned}$$

Note that, for any \({t}\in \left[ {0},\eta \right] \), \(|\mathsf {W}_{m}({t})|\leqslant {28}+{8}\left| {h}_{m}\right| \) and

$$\begin{aligned} \left| \varGamma _{m}\left( {t}\right) -\varGamma _{0}\left( {t}\right) \right|\leqslant & {} {6}\sup _{{s}\in \left[ {0},\eta \right] }\left| \mathsf {F}_{m}\left( {s}\right) -\mathsf {F}\left( {s}\right) \right| +{6}\sup _{{s}\in \left[ {0},\eta \right] }\left| \mathsf {G}_{m}\left( {s}\right) -\mathsf {G}\left( {s}\right) \right| \\&+\,{4}\sup _{{s}\in \left[ {0},\eta \right] }\left| \mathsf {H}_{m}\left( {s}\right) -\mathsf {H}\left( {s}\right) \right| . \end{aligned}$$

Hence, as \({m}\rightarrow \infty \), \(\sup _{{t}\in \left[ {0},\eta \right] }|\mathsf {W}_{m}({t})|={O}(1)\) and \(\sup _{{t}\in \left[ {0},\eta \right] }|\varGamma _{m}({t})-\varGamma _{0}({t})|={o}(1)\).

Therefore, \(\phi \) is Hadamard differentiable at \(\beta _{0}\) and for any \({t}\in \left[ {0},\eta \right] \),

$$\begin{aligned} \phi ^{\prime }\left( \beta _{0}\right) \left[ \beta \right] \left[ {t}\right]= & {} \mathsf {F}_{0}\left( {t}\right) \mathsf {G}\left( {t}\right) +\mathsf {F}\left( {t}\right) \mathsf {G}_{0}\left( {t}\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}\left( dx\right) \\&-\,\int _{\left[ {0},{t}\right] }\mathsf {G}_{0}\left( {x}\right) \mathsf {H}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) -\int _{\left[ {0},{t}\right] }\mathsf {G}\left( {x}\right) \mathsf {H}_{0}\left( {t}-{x}\right) \mathsf {F}_{0}\left( dx\right) , \end{aligned}$$

where \(\beta =(\mathsf {F},\mathsf {G},\mathsf {H})\).\(\square \)
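The algebra behind Lemma 3 can be spot-checked numerically: since \(\phi (\beta _{0}+{h}\beta )=\phi (\beta _{0})+{h}\varGamma +{h}^{2}\mathsf {W}\) holds exactly, the difference quotient should match the stated derivative up to \({O}({h})\). The discretization below (step functions on a uniform grid, with \(\mathsf {F}\left( d{u}\right) \) taken as increments) is an illustrative sketch, not part of the proof.

```python
import numpy as np

def phi(F, G, H):
    """phi(F,G,H)[k] = F[k]G[k] - sum_{u<=k} G[u] H[k-u] dF[u], uniform grid."""
    dF = np.diff(F, prepend=0.0)                 # jumps of F, with F(0-) = 0
    return np.array([F[k] * G[k]
                     - sum(G[u] * H[k - u] * dF[u] for u in range(k + 1))
                     for k in range(len(F))])

def dphi(F0, G0, H0, F, G, H):
    """The Hadamard derivative phi'(beta0)[beta] of Lemma 3, discretized."""
    dF0, dF = np.diff(F0, prepend=0.0), np.diff(F, prepend=0.0)
    out = np.empty(len(F0))
    for k in range(len(F0)):
        out[k] = (F0[k] * G[k] + F[k] * G0[k]
                  - sum(G0[u] * H0[k - u] * dF[u] for u in range(k + 1))
                  - sum(G0[u] * H[k - u] * dF0[u] for u in range(k + 1))
                  - sum(G[u] * H0[k - u] * dF0[u] for u in range(k + 1)))
    return out

rng = np.random.default_rng(1)
F0, G0, H0 = (np.cumsum(rng.uniform(0, 0.1, 20)) for _ in range(3))
F, G, H = (rng.uniform(-1, 1, 20) for _ in range(3))   # perturbation directions
h = 1e-6
fd = (phi(F0 + h * F, G0 + h * G, H0 + h * H) - phi(F0, G0, H0)) / h
print(np.max(np.abs(fd - dphi(F0, G0, H0, F, G, H))))  # O(h) discrepancy
```

The discrepancy is of order \({h}\) because the remainder \(\mathsf {W}_{m}\) is uniformly bounded, which is exactly the mechanism the proof exploits.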

1.2 Proof of Theorems 1 to 3

For any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \) and any \({t}\geqslant {0}\), define

$$\begin{aligned} \overline{\mathsf {M}}_{n}^{\left( {i},{j}\right) }\left( {t}\right) =\overline{\mathsf {N}}_{n}^{\left( {i},{j}\right) }\left( {t}\right) -\int _{\left[ {0},{t}\right] }\overline{\mathsf {Y}}_{n}^{\left( {i}\right) }\left( {y}\right) \;\varLambda _{0}^{\left( {i},{j}\right) }\left( d{y}\right) . \end{aligned}$$
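For intuition, \(\overline{\mathsf {M}}_{n}^{\left( {i},{j}\right) }\) is the usual counting-process residual: \(\overline{\mathsf {N}}_{n}^{\left( {i},{j}\right) }\) counts observed \({i}\rightarrow {j}\) transitions and \(\overline{\mathsf {Y}}_{n}^{\left( {i}\right) }\) is the at-risk size. Replacing \(\varLambda _{0}^{\left( {i},{j}\right) }\) by its Nelson-Aalen estimate makes the residual vanish at the terminal time, which the sketch below checks on fake sojourn data (the variable names and the data-generating step are illustrative assumptions).

```python
import numpy as np

def nelson_aalen(x, delta):
    """Nelson-Aalen increments dLambda(t) = dN(t)/Y(t) for one transition
    type: x are observed sojourn times, delta = 1 flags an observed i -> j
    transition (0 = censored or a competing transition)."""
    times = np.unique(x[delta == 1])
    Y = np.array([(x >= t).sum() for t in times])                  # at risk at t-
    dN = np.array([((x == t) & (delta == 1)).sum() for t in times])
    return times, Y, dN / Y

rng = np.random.default_rng(2)
x = np.round(rng.exponential(1.0, 500), 2)       # rounding creates ties
delta = rng.integers(0, 2, 500)
times, Y, dLam = nelson_aalen(x, delta)
# the residual N(tau) - sum_t Y(t) dLambda-hat(t) vanishes (up to rounding)
print(delta.sum() - (Y * dLam).sum())
```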

Proof

Combining Theorem 2.10.6 of Vaart and Wellner (1996) with Lemmas 1 and 2, it can be shown that, as \({n}\rightarrow \infty \),

$$\begin{aligned} {n}^{-{1}/{2}}\left( \overline{\mathsf {M}}_{n}^{\left( {0},{1}\right) },\overline{\mathsf {M}}_{n}^{\left( {0},{2}\right) },\overline{\mathsf {M}}_{n}^{\left( {1},{2}\right) },\overline{\mathsf {Y}}_{n}^{\left( {0}\right) }-\mathsf {E}\overline{\mathsf {Y}}_{n}^{\left( {0}\right) },\overline{\mathsf {Y}}_{n}^{\left( {1}\right) }-\mathsf {E}\overline{\mathsf {Y}}_{n}^{\left( {1}\right) }\right) \end{aligned}$$

weakly converges to a tight zero mean Gaussian process in

$$\begin{aligned} \ell _{5}^{\infty }\left( \left[ {0},\tau \right] \right) =\ell ^{\infty }\left( \left[ {0},\tau \right] \right) \times \ell ^{\infty }\left( \left[ {0},\tau \right] \right) \times \ell ^{\infty }\left( \left[ {0},\tau \right] \right) \times \ell ^{\infty }\left( \left[ {0},\tau \right] \right) \times \ell ^{\infty }\left( \left[ {0},\tau \right] \right) . \end{aligned}$$

Combining this with Lemma 3.9.17, Lemma 3.9.25, and Theorem 3.9.4 of Vaart and Wellner (1996), for any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \),

$$\begin{aligned} \sup _{{t}\in \left[ {0},\tau \right] }\left| \hat{\varLambda }_{n}^{\left( {i},{j}\right) }\left( {t}\right) -\varLambda _{0}^{\left( {i},{j}\right) }\left( {t}\right) -\int _{\left[ {0},{t}\right] }\left( \mathsf {L}_{0}^{\left( {i}\right) }\left( {y}-\right) \right) ^{-1}\;{n}^{-1}\overline{\mathsf {M}}_{n}^{\left( {i},{j}\right) }\left( d{y}\right) \right| ={o}_{p}\left( {n}^{-{1}/{2}}\right) . \end{aligned}$$
(10)

By the continuity and the linearity of the integral operator, as \({n}\rightarrow \infty \),

$$\begin{aligned} {n}^{{1}/{2}}\left( \hat{\varLambda }_{n}^{\left( {0},{1}\right) }-\varLambda _{0}^{\left( {0},{1}\right) },\hat{\varLambda }_{n}^{\left( {0},{2}\right) }-\varLambda _{0}^{\left( {0},{2}\right) },\hat{\varLambda }_{n}^{\left( {1},{2}\right) }-\varLambda _{0}^{\left( {1},{2}\right) }\right) \end{aligned}$$

weakly converges to a tight Gaussian process in \(\ell _{3}^{\infty }\left( \left[ {0},\tau \right] \right) \) with covariance function \(\mathcal {U}_{0}\).

By Lemma 3.9.30 and Theorem 3.9.4 of Vaart and Wellner (1996), as \({n}\rightarrow \infty \),

$$\begin{aligned} {n}^{{1}/{2}}\left( \hat{\mathsf {S}}_{n}^{\left( {0},{1}\right) }-\mathsf {S}_{0}^{\left( {0},{1}\right) },\hat{\mathsf {S}}_{n}^{\left( {0},{2}\right) }-\mathsf {S}_{0}^{\left( {0},{2}\right) },\hat{\mathsf {S}}_{n}^{\left( {1},{2}\right) }-\mathsf {S}_{0}^{\left( {1},{2}\right) }\right) \end{aligned}$$

weakly converges to a tight Gaussian process in \(\ell _{3}^{\infty }([{0},\tau ])\) with covariance function \(\mathcal {V}_{0}\).

Combining Lemma 3 with Theorem 3.9.4 of Vaart and Wellner (1996),

$$\begin{aligned} \sup \limits _{t\in \left[ {0},\tau \right] }\left| \hat{\mathsf {S}}_{n}\left( {t}\right) -\mathsf {S}_{0}\left( {t}\right) -\phi ^{\prime }\left( \beta _{0}\right) \left[ \hat{\beta }_{n}-\beta _{0}\right] \left[ {t}\right] \right| ={o}_{p}\left( {n}^{-1/2}\right) , \end{aligned}$$

where \(\beta _{0}=\left( \mathsf {S}_{0}^{\left( {0},{1}\right) },\mathsf {S}_{0}^{\left( {0},{2}\right) },\mathsf {S}_{0}^{\left( {1},{2}\right) }\right) \) and \(\hat{\beta }_{n}=\left( \hat{\mathsf {S}}_{n}^{\left( {0},{1}\right) },\hat{\mathsf {S}}_{n}^{\left( {0},{2}\right) },\hat{\mathsf {S}}_{n}^{\left( {1},{2}\right) }\right) \).

Therefore, as \({n}\rightarrow \infty , {n}^{{1}/{2}}(\hat{\mathsf {S}}_{n}-\mathsf {S}_{0})\) weakly converges to a tight zero mean Gaussian process in \(\ell ^{\infty }\left( \left[ {0},\tau \right] \right) \) with covariance function \(\varOmega _{0}\).\(\square \)

1.3 Proof of Theorems 4 and 5

Proof

To establish efficiency, we verify the conditions of Theorem VIII.3.2 and Theorem VIII.3.3 of Andersen et al. (1993).

First, we verify the local asymptotic normality (LAN) Assumption (Assumption VIII.3.1 of Andersen et al. 1993).

By a Taylor expansion, it can be shown that for any \({h}=({h}^{\left( {0},{1}\right) },{h}^{\left( {0},{2}\right) },{h}^{\left( {1},{2}\right) })\in \mathbb {H}\),

$$\begin{aligned} \log \mathcal {R}_{n}\left( {h}\right) =\mathcal {S}_{n}^{\left( {0},{1}\right) }\left( {h}\right) +\mathcal {S}_{n}^{\left( {0},{2}\right) }\left( {h}\right) +\mathcal {S}_{n}^{\left( {1},{2}\right) }\left( {h}\right) -\dfrac{1}{2}\left\| {h}\right\| _{\mathbb {H}}^{2}+{o}_{p}\left( {1}\right) , \end{aligned}$$
(11)

where for any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \) and any \({h}=({h}^{\left( {0},{1}\right) },{h}^{\left( {0},{2}\right) },{h}^{\left( {1},{2}\right) })\in \mathbb {H}\),

$$\begin{aligned} \mathcal {S}_{n}^{\left( {i},{j}\right) }\left( {h}\right) ={n}^{-{1}/{2}}\int _{\left[ {0},\tau \right] }{h}^{\left( {i},{j}\right) }\left( {y}\right) \left( {1}-\varLambda _{0}^{\left( {i},{j}\right) }\left\{ {y}\right\} \right) ^{-1}\;\overline{\mathsf {M}}_{n}^{\left( {i},{j}\right) }\left( d{y}\right) . \end{aligned}$$

Hence, for any \({m}\in \mathbb {N}^{+}\) and any \({h}_{1},\ldots ,{h}_{m}\in \mathbb {H}\),

$$\begin{aligned} \left( \log \mathcal {R}_{n}\left( {h}_{1}\right) ,\ldots ,\log \mathcal {R}_{n}\left( {h}_{m}\right) \right) \end{aligned}$$

weakly converges to a Gaussian random vector in \(\mathbb {R}^{m}\) with mean

$$\begin{aligned} -\dfrac{1}{2}\left( \left\| {h}_{1}\right\| _{\mathbb {H}}^{2},\ldots ,\left\| {h}_{m}\right\| _{\mathbb {H}}^{2}\right) \end{aligned}$$

and covariance matrix

$$\begin{aligned} \begin{pmatrix} \left<{h}_{1},{h}_{1}\right>_{\mathbb {H}}&{}\quad \ldots &{}\quad \left<{h}_{1},{h}_{m}\right>_{\mathbb {H}}\\ \vdots &{}\ddots &{}\vdots \\ \left<{h}_{m},{h}_{1}\right>_{\mathbb {H}}&{}\quad \cdots &{}\quad \left<{h}_{m},{h}_{m}\right>_{\mathbb {H}}\\ \end{pmatrix}. \end{aligned}$$

Next, we verify the Differentiability Assumption (Assumption VIII.3.2 of Andersen et al. 1993).

For any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \), any \({h}=({h}^{\left( {0},{1}\right) },{h}^{\left( {0},{2}\right) },{h}^{\left( {1},{2}\right) })\in \mathbb {H}\), and any \({t}\in \left[ {0},\tau \right] \),

$$\begin{aligned} {n}^{{1}/{2}}\left( \varLambda _{n,h}^{\left( {i},{j}\right) }\left( {t}\right) -\varLambda _{0}^{\left( {i},{j}\right) }\left( {t}\right) \right) =\int _{\left[ {0},{t}\right] }{h}^{\left( {i},{j}\right) }\left( {y}\right) \;\varLambda _{0}^{\left( {i},{j}\right) }\left( d{y}\right) . \end{aligned}$$

Note that, for any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \), any \({h}=({h}^{\left( {0},{1}\right) },{h}^{\left( {0},{2}\right) },{h}^{\left( {1},{2}\right) })\in \mathbb {H}\), and any \({t}\in \left[ {0},\tau \right] \),

$$\begin{aligned} \left| \int _{\left[ {0},{t}\right] }{h}^{\left( {i},{j}\right) }\left( {y}\right) \;\varLambda _{0}^{\left( {i},{j}\right) }\left( d{y}\right) \right| \leqslant \left\| {h}\right\| _{\mathbb {H}}\varLambda _{0}^{\left( {i},{j}\right) }\left( \tau \right) \left( \mathsf {L}_{0}^{\left( {i}\right) }\left( \tau -\right) \right) ^{-1}. \end{aligned}$$

Hence, the Differentiability Assumption is verified.

Next, we show that \(\left( \hat{\varLambda }_{n}^{\left( {0},{1}\right) },\hat{\varLambda }_{n}^{\left( {0},{2}\right) },\hat{\varLambda }_{n}^{\left( {1},{2}\right) }\right) \) is regular.

Recall the weak convergence of the log-likelihood ratio \(\log \mathcal {R}_{n}\left( {h}\right) \). The continuity follows from Le Cam's third lemma, and the weak convergence can be shown by arguments similar to those in the previous subsection. Hence, the regularity follows.

It remains to check Equation (8.3.5) of Andersen et al. (1993) for each coordinate.

For any \({t}\in \left[ {0},\tau \right] \), define

$$\begin{aligned} \gamma _{t}^{\left( {0},{1}\right) }=\left( \varphi _{t}^{\left( {0},{1}\right) },{0},{0}\right) ,\; \gamma _{t}^{\left( {0},{2}\right) }=\left( {0},\varphi _{t}^{\left( {0},{2}\right) },{0}\right) ,\; \gamma _{t}^{\left( {1},{2}\right) }=\left( {0},{0},\varphi _{t}^{\left( {1},{2}\right) }\right) \end{aligned}$$

where for any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \) and any \({s}\in \left[ {0},\tau \right] \),

$$\begin{aligned} \varphi _{t}^{\left( {i},{j}\right) }\left( {s}\right) =\mathrm {I}\left\{ {s}\leqslant {t}\right\} \left( \mathsf {L}_{0}^{\left( {i}\right) }\left( {s}-\right) \right) ^{-1}\left( {1}-\varLambda _{0}^{\left( {i},{j}\right) }\left\{ {s}\right\} \right) . \end{aligned}$$

By (10) and (11), for any \(\left( {i},{j}\right) \in \left\{ \left( {0},{1}\right) ,\left( {0},{2}\right) ,\left( {1},{2}\right) \right\} \) and any \({t}\in \left[ {0},\tau \right] \),

$$\begin{aligned} \hat{\varLambda }_{n}^{\left( {i},{j}\right) }\left( {t}\right) -\varLambda _{0}^{\left( {i},{j}\right) }\left( {t}\right) -{n}^{-{1}/{2}}\left( \log \mathcal {R}_{n}\left( \gamma _{t}^{\left( {i},{j}\right) }\right) +\dfrac{1}{2}\left\| \gamma _{t}^{\left( {i},{j}\right) }\right\| _{\mathbb {H}}^{2}\right) ={o}_{p}\left( {n}^{-{1}/{2}}\right) . \end{aligned}$$

Therefore, combining all the above results with Theorem VIII.3.2 and Theorem VIII.3.3 of Andersen et al. (1993), it follows that \(\left( \hat{\varLambda }_{n}^{\left( {0},{1}\right) },\hat{\varLambda }_{n}^{\left( {0},{2}\right) },\hat{\varLambda }_{n}^{\left( {1},{2}\right) }\right) \) is asymptotically efficient.

Combining Theorem VIII.3.4 of Andersen et al. (1993) with Lemma 3, the efficiency of \(\hat{\mathsf {S}}_{n}\) follows.\(\square \)

About this article

Cite this article

Zhao, Z., Zheng, M. & Jin, Z. Estimating the survival function based on the semi-Markov model for dependent censoring. Lifetime Data Anal 22, 161–190 (2016). https://doi.org/10.1007/s10985-015-9325-0
