
Nonparametric latency estimation for mixture cure models

Abstract

A nonparametric latency estimator for mixture cure models is studied in this paper. An i.i.d. representation is obtained, the asymptotic mean squared error of the latency estimator is found, and its asymptotic normality is proven. A bootstrap bandwidth selection method is introduced and its efficiency is evaluated in a simulation study. The proposed methods are applied to a dataset of colorectal cancer patients in the University Hospital of A Coruña (CHUAC).

References

  • Arcones MA (1997) The law of the iterated logarithm for a triangular array of empirical processes. Electron J Probab 2:1–39

  • Beran R (1981) Nonparametric regression with randomly censored survival data. Technical Report, University of California, Berkeley

  • Billingsley P (1968) Convergence of probability measures. Wiley, New York

  • Boag JW (1949) Maximum likelihood estimates of the proportion of patients cured by cancer therapy. J R Stat Soc B Met 11:15–53

  • Chappell R, Nondahl DM, Fowler JF (1995) Modeling dose and local control in radiotherapy. J Am Stat Assoc 90:829–838

  • Farewell VT (1982) The use of mixture models for the analysis of survival data with long-term survivors. Biometrics 38:1041–1046

  • Farewell VT (1986) Mixture models in survival analysis: are they worth the risk? Can J Stat 14:257–262

  • Goldman AI (1984) Survivorship analysis when cure is a possibility: a Monte Carlo study. Stat Med 3:153–163

  • González-Manteiga W, Crujeiras RM (2013) An updated review of Goodness-of-Fit tests for regression models (with discussions and rejoinder). TEST 22:361–447

  • Iglesias-Pérez MC, González-Manteiga W (1999) Strong representation of a generalized product-limit estimator for truncated and censored data with some applications. J Nonparametr Stat 10:213–244

  • Kuk AYC, Chen CH (1992) A mixture model combining logistic regression with proportional hazards regression. Biometrika 79:531–541

  • Laska EM, Meisner MJ (1992) Nonparametric estimation and testing in a cure model. Biometrics 48:1223–1234

  • López-Cheda A, Cao R, Jácome MA, Van Keilegom I (2017) Nonparametric incidence estimation and bootstrap bandwidth selection in mixture cure models. Comput Stat Data Anal 105:144–165

  • Louzada F, Cobre J (2012) A multiple time scale survival model with a cure fraction. TEST 21:355–368

  • Maller RA, Zhou S (1992) Estimating the proportion of immunes in a censored sample. Biometrika 79:731–739

  • Maller RA, Zhou S (1996) Survival analysis with long-term survivors. Wiley, Chichester

  • Peng Y (2003) Fitting semiparametric cure models. Comput Stat Data Anal 41:481–490

  • Peng Y, Dear KB (2000) A nonparametric mixture model for cure rate estimation. Biometrics 56:237–243

  • Sposto R, Sather HN, Baker SA (1992) A comparison of tests of the difference in the proportion of patients who are cured. Biometrics 48:87–99

  • Sy JP, Taylor JMG (2000) Estimation in a Cox proportional hazards cure model. Biometrics 56:227–236

  • Taylor JMG (1995) Semi-parametric estimation in failure time mixture models. Biometrics 51:899–907

  • Wang L, Du P, Lian H (2012) Two-component mixture cure rate model with spline estimated nonparametric components. Biometrics 68:726–735

  • Xu J, Peng Y (2014) Nonparametric cure rate estimation with covariates. Can J Stat 42:1–17

  • Yu B, Peng Y (2008) Mixture cure models for multivariate survival data. Comput Stat Data Anal 52:1524–1532

Acknowledgements

The first author’s research was sponsored by the Spanish FPU (Formación de Profesorado Universitario) Grant from MECD (Ministerio de Educación, Cultura y Deporte) with reference FPU13/01371. All the authors acknowledge partial support by the MINECO (Ministerio de Economía y Competitividad) grant MTM2014-52876-R (EU ERDF support included), the MICINN (Ministerio de Ciencia e Innovación) Grant MTM2011-22392 (EU ERDF support included) and Xunta de Galicia GRC Grant CN2012/130. The authors are grateful to Dr. Sonia Pértega and Dr. Salvador Pita, at the University Hospital of A Coruña, for providing the colorectal cancer data set.

Author information

Corresponding author

Correspondence to Ana López-Cheda.

Appendix

Proof of Theorem 1

The nonparametric estimator of \(S_0(t|x)\) in (4) can be decomposed as follows:

$$\begin{aligned} \hat{S}_{0,h}(t|x)-S_{0}(t|x)=A_{11}+ A_{21} +A_{12}+ A_{22}, \end{aligned}$$
(19)

where the dominant terms of the i.i.d. representation of \(\hat{S}_{0,h}(t|x)\) derive from

$$\begin{aligned} A_{11} =\frac{\hat{S}_{h}(t|x)-S(t|x)}{p(x)}\quad \text { and } \quad A_{21} =\frac{1-S(t|x)}{p^{2}(x)}(\hat{p}_{h}(x)-p(x)), \end{aligned}$$
(20)

and the remaining terms

$$\begin{aligned} A_{12} =\frac{(\hat{S}_{h}(t|x)-S(t|x))(p(x)-\hat{p}_{h}(x))}{\hat{p}_{h}(x)p(x)}\quad \text { and }\quad A_{22}= \frac{S(t|x)-1}{p^2(x)}\frac{\left( \hat{p} _{h}(x)-p(x)\right) ^{2}}{\hat{p}_{h}(x)} \end{aligned}$$
(21)

will be proved to be negligible.
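
Although equation (4) itself is not reproduced in this appendix, the four terms above are consistent with the plug-in forms \(\hat{S}_{0,h}(t|x)=1-(1-\hat{S}_{h}(t|x))/\hat{p}_{h}(x)\) and \(S_{0}(t|x)=1-(1-S(t|x))/p(x)\); under that (assumed) reading, the decomposition (19) is an exact algebraic identity, since

$$\begin{aligned} A_{11}+A_{12}= & {} (\hat{S}_{h}(t|x)-S(t|x))\left( \frac{1}{p(x)}+\frac{p(x)-\hat{p}_{h}(x)}{\hat{p}_{h}(x)p(x)}\right) =\frac{\hat{S}_{h}(t|x)-S(t|x)}{\hat{p}_{h}(x)},\\ A_{21}+A_{22}= & {} \frac{(1-S(t|x))(\hat{p}_{h}(x)-p(x))}{p^{2}(x)}\left( 1-\frac{\hat{p}_{h}(x)-p(x)}{\hat{p}_{h}(x)}\right) =\frac{(1-S(t|x))(\hat{p}_{h}(x)-p(x))}{p(x)\hat{p}_{h}(x)}, \end{aligned}$$

and the sum of the two right-hand sides equals \((1-S(t|x))/p(x)-(1-\hat{S}_{h}(t|x))/\hat{p}_{h}(x)=\hat{S}_{0,h}(t|x)-S_{0}(t|x)\).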

The i.i.d. representation of the term \(A_{11}\) in (20) follows, under assumptions (A1)–(A7), (A11) and (A12), from that of \(\hat{S}_h(t|x)\) in Theorem 2 of Iglesias-Pérez and González-Manteiga (1999):

$$\begin{aligned} A_{11}=-\frac{S(t|x)}{p(x)} \sum _{i=1}^{n}\tilde{B}_{h,i}(x)\xi (T_{i},\delta _{i},t,x)+O\left( \left( \frac{\ln n}{nh}\right) ^{3/4}\right) \text { a.s.} \end{aligned}$$
(22)

Under assumptions (A1)–(A12), the dominant terms of the i.i.d. representation of \(A_{21}\) in (20) come from the i.i.d. representation of \(\hat{p}_h(x)\) in Theorem 3 of López-Cheda et al. (2017):

$$\begin{aligned} A_{21}= -\frac{(1-S(t|x))}{p^{2}(x)}(1-p(x))\sum _{i=1}^{n} \tilde{B}_{h,i}(x)\xi (T_{i},\delta _{i},\infty ,x) +O\left( \left( \frac{\ln n}{nh}\right) ^{3/4}\right) \text { a.s.} \end{aligned}$$
(23)

We continue by proving the negligibility of \(A_{12}\) in (21). Under assumptions (A3a), (A4), (A5) and (A11), we apply Lemma 5 in Iglesias-Pérez and González-Manteiga (1999) to obtain

$$\begin{aligned} \hat{S}_{h}(t|x)-S(t|x)=O\left( \sqrt{\frac{\ln \ln n}{nh}}+h^{2}\right) \text { a.s.} \end{aligned}$$

and, similarly from Theorem 3.3 in Arcones (1997) and the Strong Law of Large Numbers (SLLN),

$$\begin{aligned} \hat{p}_{h}(x)-p(x)=O\left( \sqrt{\frac{\ln \ln n}{nh}}+h^{2}\right) \text { a.s.} \end{aligned}$$
(24)

If the bandwidth satisfies \(h\rightarrow 0\), \(\frac{\ln n}{nh}\rightarrow 0\) and \(\frac{nh^{5}}{\ln n}=O(1)\), then, together with the convergence \(\hat{p}_{h}(x)\rightarrow p(x)\) a.s. proved in Lemma 7 of López-Cheda et al. (2017), it is straightforward to check that

$$\begin{aligned} A_{12}=O\left( \left( \frac{\ln n}{nh}\right) ^{3/4}\right) \, \mathrm{a.s.} \end{aligned}$$
(25)

With respect to \(A_{22}\) in (21), if \(h\rightarrow 0\), \(\frac{\ln n}{nh}\rightarrow 0\) and \(\frac{nh^{5}}{\ln n}=O(1)\), using the almost sure consistency of \(\hat{p}_{h}(x)\), it follows from (24) that

$$\begin{aligned} A_{22}=O\left( \left( \frac{\ln n}{nh}\right) ^{3/4}\right) \, \text {a.s.} \end{aligned}$$
(26)

The proof of the theorem follows from the decomposition (19) and the results (22), (23), (25) and (26). \(\square \)
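
Before moving to Theorem 2, and purely as an illustration (no code appears in the paper), here is a minimal Python sketch of the plug-in latency estimator discussed above. It assumes that \(\hat{S}_h(t|x)\) is Beran's (1981) conditional product-limit estimator with Nadaraya–Watson weights, that an Epanechnikov kernel is used, and that \(\hat{p}_h(x)\) is obtained, as in López-Cheda et al. (2017), by evaluating \(1-\hat{S}_h(\cdot |x)\) at the largest uncensored observation; all function names and implementation details are hypothetical.

```python
import numpy as np

def beran_survival(t, x, T, delta, X, h):
    """Conditional survival estimate S_h(t|x) in the spirit of Beran (1981),
    with Nadaraya-Watson weights and an Epanechnikov kernel (assumed choices).
    T, delta, X are 1-d numpy arrays of observed times, censoring indicators
    (1 = uncensored) and covariate values."""
    u = (x - X) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)  # Epanechnikov kernel
    if K.sum() == 0.0:
        return np.nan                       # no observations near x for this bandwidth
    B = K / K.sum()                         # Nadaraya-Watson weights B_{h,i}(x)
    order = np.argsort(T)
    T_s, d_s, B_s = T[order], delta[order], B[order]
    S = 1.0
    for i in range(len(T_s)):
        if T_s[i] > t:
            break
        at_risk = B_s[i:].sum()             # weighted mass of {j : T_j >= T_(i)}
        if d_s[i] == 1 and at_risk > 0.0:
            S *= 1.0 - B_s[i] / at_risk     # product-limit step at uncensored times
    return S

def latency_estimate(t, x, T, delta, X, h):
    """Plug-in latency sketch: S_{0,h}(t|x) = 1 - (1 - S_h(t|x)) / p_h(x)."""
    S_t = beran_survival(t, x, T, delta, X, h)
    t1_max = T[delta == 1].max()            # largest uncensored observation
    p_x = 1.0 - beran_survival(t1_max, x, T, delta, X, h)  # incidence estimate p_h(x)
    return np.nan if p_x <= 0.0 else 1.0 - (1.0 - S_t) / p_x
```

In practice the bandwidth \(h\) would be chosen by the bootstrap procedure studied in the paper; here it is simply an input.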

Proof of Theorem 2

From Theorem 1, the latency estimator can be decomposed as

$$\begin{aligned} \hat{S}_{0,h}(t|x)-S_{0}(t|x)=C_{1}+C_{2}+O\left( \left( \frac{\ln n}{nh} \right) ^{3/4}\right) \text { a.s.}, \end{aligned}$$

where

$$\begin{aligned} C_{1}= & {} -\frac{S(t|x)}{p(x)}\sum _{i=1}^{n}\tilde{B}_{h,i}(x)\xi (T_{i},\delta _{i},t,x),\\ C_{2}= & {} -\frac{(1-p(x))(1-S(t|x))}{p^{2}(x)}\sum _{i=1}^{n}\tilde{B} _{h,i}(x)\xi (T_{i},\delta _{i},\infty ,x), \end{aligned}$$

with \(\tilde{B}_{h,i}(x)\) in (8) and \(\xi \) in (7). Then, the AMSE of \(\hat{S}_{0,h}(t|x)\) is

$$\begin{aligned} \mathrm{AMSE}(\hat{S}_{0,h}(t|x))=E(C_{1}^{2})+E(C_{2}^{2})+2E(C_{1}\cdot C_{2}). \end{aligned}$$
(27)

We start with the first term of \(\mathrm{AMSE}(\hat{S}_{0,h}(t|x))\). Note that

$$\begin{aligned} E(C_{1}^{2})=\mathrm{Var}(C_{1})+(E(C_{1}))^{2}, \end{aligned}$$
(28)

where

$$\begin{aligned} \mathrm{Var}(C_{1}) = \frac{1}{n h^2} \left( \frac{S(t|x)}{p(x)}\right) ^{2} \frac{1}{m^2(x)} \mathrm{Var} \left( K \left( \frac{x - X_1}{h} \right) \xi (T_1, \delta _1, t, x) \right) \end{aligned}$$
(29)

and

$$\begin{aligned}&\mathrm{Var} \left( K \left( \frac{x - X_1}{h} \right) \xi (T_1, \delta _1, t, x) \right) \nonumber \\&\quad = E \left( K^2 \left( \frac{x - X_1}{h} \right) \xi ^2(T_1, \delta _1, t, x) \right) - \left[ E \left( K \left( \frac{x - X_1}{h} \right) \xi (T_1, \delta _1, t,x) \right) \right] ^2.\nonumber \\ \end{aligned}$$
(30)

Let us consider \(\varPhi _{1}(y,t,x)\) defined in (9). From a change of variable and a Taylor expansion, the first term in (30) is

$$\begin{aligned} E\left[ K^{2}\left( \frac{x-X_{1}}{h}\right) \xi ^{2}(T_{1},\delta _{1},t,x) \right] =h\varPhi _{1}(x,t,x)m(x)c_{K}+O(h^{3}). \end{aligned}$$
(31)
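
Written out, and assuming (as is standard) that \(c_{K}=\int K^{2}(u)\mathrm{d}u\) and that \(m\) denotes the covariate density, this step is

$$\begin{aligned} E\left[ K^{2}\left( \frac{x-X_{1}}{h}\right) \xi ^{2}(T_{1},\delta _{1},t,x)\right]= & {} \int K^{2}\left( \frac{x-y}{h}\right) \varPhi _{1}(y,t,x)m(y)\mathrm{d}y \\= & {} h\int K^{2}(u)\varPhi _{1}(x-hu,t,x)m(x-hu)\mathrm{d}u \\= & {} h\varPhi _{1}(x,t,x)m(x)c_{K}+O(h^{3}), \end{aligned}$$

where the last equality follows from a Taylor expansion of \(\varPhi _{1}(\cdot ,t,x)m(\cdot )\) around \(x\) and the fact that \(\int uK^{2}(u)\mathrm{d}u=0\) for a symmetric kernel.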

For the second term in (30), applying a change of variable, a Taylor expansion, and taking into account the symmetry of K, it follows that

$$\begin{aligned} \left[ E\left( K\left( \frac{x-X_{1}}{h}\right) \xi (T_{1},\delta _{1},t,x)\right) \right] ^{2}=\left[ \varPhi (x,t,x)m(x)h+O(h^{3})\right] ^{2}=O(h^{6}), \end{aligned}$$
(32)

where \(\varPhi (y,t,x)=E\left[ \xi (T,\delta ,t,x)|X=y\right] \) and, as will be proved in Lemma 4, \(\varPhi (x,t,x)=0\) for all \(t\ge 0\).

From (29), (30), (31) and (32), it follows that

$$\begin{aligned} \mathrm{Var}(C_{1})=\frac{1}{nh}\left( \frac{S(t|x)}{p(x)}\right) ^{2}\frac{1}{m(x)} \varPhi _{1}(x,t,x)c_{K}+O\left( \frac{h}{n}\right) . \end{aligned}$$

Continuing with the second term on the right-hand side of (28):

$$\begin{aligned} E(C_{1}) =-\frac{1}{h}\frac{S(t|x)}{m(x)p(x)}E\left[ K\left( \frac{x-X_{1}}{h} \right) \xi (T_{1},\delta _{1},t,x)\right] . \end{aligned}$$

Using a Taylor expansion and \(\varPhi (x,t,x)=0\) for all \(t\ge 0\), we obtain

$$\begin{aligned} E(C_{1})=-\frac{1}{2}h^{2}\frac{S(t|x)}{p(x)m(x)}d_{K}\left( \varPhi ^{\prime \prime }\left( x,t,x\right) m(x)+2\varPhi ^{\prime }\left( x,t,x\right) m^{\prime }(x)\right) +o(h^{2}). \end{aligned}$$
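
To see where the \(h^{2}\) term comes from, write \(g(y)=\varPhi (y,t,x)m(y)\) and assume, as is standard, that \(d_{K}=\int u^{2}K(u)\mathrm{d}u\); then

$$\begin{aligned} E\left[ K\left( \frac{x-X_{1}}{h}\right) \xi (T_{1},\delta _{1},t,x)\right]= & {} h\int K(u)g(x-hu)\mathrm{d}u=h\left( g(x)+\frac{h^{2}}{2}d_{K}g^{\prime \prime }(x)+o(h^{2})\right) \\= & {} \frac{h^{3}}{2}d_{K}\left( \varPhi ^{\prime \prime }(x,t,x)m(x)+2\varPhi ^{\prime }(x,t,x)m^{\prime }(x)\right) +o(h^{3}), \end{aligned}$$

since \(\int uK(u)\mathrm{d}u=0\) by the symmetry of \(K\), \(g(x)=\varPhi (x,t,x)m(x)=0\), and the \(\varPhi (x,t,x)m^{\prime \prime }(x)\) contribution to \(g^{\prime \prime }(x)\) also vanishes; multiplying by \(-\frac{1}{h}\frac{S(t|x)}{m(x)p(x)}\) gives the displayed expression for \(E(C_{1})\). The same expansion reappears as (42) in the proof of Theorem 3.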

So the first term of \(\mathrm{AMSE}(\hat{S}_{0,h}(t|x))\) in (27) is

$$\begin{aligned} E(C_{1}^{2})= & {} \frac{1}{4}h^{4}d_{K}^{2}\left[ \frac{S(t|x)}{p(x)m(x)}\left( \varPhi ^{\prime \prime }\left( x,t,x\right) m(x)+2\varPhi ^{\prime }\left( x,t,x\right) m^{\prime }(x)\right) \right] ^{2} \nonumber \\&+\frac{1}{nh}\left( \frac{S(t|x)}{p(x)}\right) ^{2}\frac{1}{ m(x)}\varPhi _{1}(x,t,x)c_{K} +o(h^{4})+O\left( \frac{h}{n} \right) . \end{aligned}$$
(33)

Following the same ideas as those for \(C_{1}\), we obtain for \(C_{2}\) that

$$\begin{aligned} E(C_{2}^{2})= & {} \frac{1}{nh}\left( \frac{(1-S(t|x))(1-p(x))}{p^{2}(x)} \right) ^{2}\frac{1}{m(x)}\varPhi _{1}(x,\infty ,x)c_{K} \nonumber \\&+\frac{1}{4}h^{4}d_{K}^{2}\left[ \frac{(1-S(t|x))(1-p(x))}{p^{2}(x)m\left( x\right) } \right. \nonumber \\&\times \left. \left( \varPhi ^{\prime \prime }\left( x,\infty ,x\right) m(x)+2\varPhi ^{\prime }\left( x,\infty ,x\right) m^{\prime }(x)\right) \right] ^{2}+o(h^{4})+O\left( \frac{h}{n}\right) .\nonumber \\ \end{aligned}$$
(34)

We now study the third term of \(\mathrm{AMSE}(\hat{S}_{0,h}(t|x))\) in (27):

$$\begin{aligned} E\left( C_{1}\cdot C_{2}\right) =\frac{(1-p(x))S(t|x)(1-S(t|x))}{p^{3}(x)}\left[ n(n-1)\alpha \beta +n\gamma \right] , \end{aligned}$$

where

$$\begin{aligned} \alpha= & {} E\left[ \tilde{B}_{h1}(x)\xi (T_{1},\delta _{1},t,x)\right] , \\ \beta= & {} E\left[ \tilde{B}_{h1}(x)\xi (T_{1},\delta _{1},\infty ,x)\right] \text {,} \\ \gamma= & {} E\left[ \tilde{B}_{h1}^{2}(x)\xi (T_{1},\delta _{1},t,x)\xi (T_{1},\delta _{1},\infty ,x)\right] . \end{aligned}$$

Using a Taylor expansion and \(\varPhi (x,t,x)=0\) for all \(t\ge 0\), the terms \(\alpha \) and \(\beta \) are

$$\begin{aligned} \alpha= & {} \frac{1}{2}\frac{h^{2}}{n}d_{K}\frac{1}{m(x)}\left( \varPhi ^{\prime \prime }\left( x,t,x\right) m(x)+2\varPhi ^{\prime }\left( x,t,x\right) m^{\prime }(x)\right) +o\left( \frac{h^{2}}{n}\right) , \end{aligned}$$
(35)
$$\begin{aligned} \beta= & {} \frac{1}{2}\frac{h^{2}}{n}d_{K}\frac{1}{m(x)}\left( \varPhi ^{\prime \prime }\left( x,\infty ,x\right) m(x)+2\varPhi ^{\prime }\left( x,\infty ,x\right) m^{\prime }(x)\right) +o\left( \frac{h^{2}}{n}\right) . \end{aligned}$$
(36)

For the term \(\gamma \), it follows that

$$\begin{aligned} \gamma= & {} \frac{1}{n^{2}h^{2}}\frac{1}{m^{2}(x)}\int K^{2}\left( \frac{x-y}{h} \right) \varPhi _{2}(y,t,x)m(y)\mathrm{d}y \nonumber \\= & {} \frac{1}{n^{2}h}\frac{1}{m(x)}\varPhi _{2}(x,t,x)c_{K}+O\left( \frac{h}{ n^{2}}\right) , \end{aligned}$$
(37)

where \(\varPhi _{2}(y,t,x)=E\left[ \xi (T,\delta ,t,x)\xi (T,\delta ,\infty ,x)|X=y\right] \). From (35), (36) and (37), the third term of \(\mathrm{AMSE}(\hat{S}_{0,h}(t|x))\) in (27) is:

$$\begin{aligned} E\left( C_{1}\cdot C_{2}\right)= & {} \frac{(1-p(x))S(t|x)(1-S(t|x))}{p^{3}(x)}\left[ \frac{1}{4}h^{4}d_{K}^{2}\frac{1}{m^{2}(x)}\right. \nonumber \\&\times \left( \varPhi ^{\prime \prime }\left( x,t,x\right) m(x)+2\varPhi ^{\prime }\left( x,t,x\right) m^{\prime }(x)\right) \nonumber \\&\times \left( \varPhi ^{\prime \prime }\left( x,\infty ,x\right) m(x)+2\varPhi ^{\prime }\left( x,\infty ,x\right) m^{\prime }(x)\right) \nonumber \\&+\left. \frac{1}{nh}\frac{1}{m(x)}\varPhi _{2}(x,t,x)c_{K} \right] +o\left( h^{4}\right) +O\left( \frac{h}{n}\right) . \end{aligned}$$
(38)

Compiling (33), (34) and (38), the \(\mathrm{AMSE}(\hat{S} _{0,h}(t|x))\) in (27) is

$$\begin{aligned} \mathrm{AMSE}(\hat{S}_{0,h}(t|x))= & {} \frac{1}{nh}\frac{1}{m(x)}c_{K} \left( \left( \frac{S(t|x)}{p(x)}\right) ^{2}\varPhi _{1}(x,t,x)\right. \nonumber \\&\quad \left. + \left( \frac{(1-S(t|x))(1-p(x))}{p^{2}(x)}\right) ^{2}\varPhi _{1}(x,\infty ,x)\right. \\&\quad +\left. 2\frac{(1-p(x))S(t|x)(1-S(t|x))}{p^{3}(x)}\varPhi _{2}(x,t,x)\right) \\&\quad +\frac{1}{4}h^{4}d_{K}^{2}\frac{1}{m^{2}(x)}\left( \frac{S(t|x)}{p(x)}\left( \varPhi ^{\prime \prime }\left( x,t,x\right) m(x)+ 2 \varPhi ^{\prime }\left( x,t,x\right) m^{\prime }(x)\right) \right. \\&\quad +\,\frac{(1-S(t|x))(1-p(x))}{p^{2}(x)} ( \varPhi ^{\prime \prime }\left( x,\infty ,x\right) m(x)\\&\quad \left. +\,2\varPhi ^{\prime }\left( x,\infty ,x\right) m^{\prime }(x)) \right) ^{2} \nonumber \\&\quad +\, o(h^{4}) +O\left( \frac{h}{n}\right) . \end{aligned}$$

Since, as proven in Lemmas 5 and 6 (see (40) and (41)),

$$\begin{aligned} \varPhi _{1}(x,t,x)=\varPhi _{2}(x,t,x)=\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|x\right) }{\left( 1-H(v|x)\right) ^{2}}, \end{aligned}$$

and considering (10)–(14), the AMSE of \(\hat{S}_{0,h}(t|x)\) is, finally, that in (15).

This completes the proof. \(\square \)
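
As a side remark not made in this excerpt, the AMSE just obtained has the familiar bias–variance structure: writing its leading terms as \(Ah^{4}+B/(nh)\) with constants \(A\ge 0\), \(B>0\) depending on \((t,x)\), the minimiser (when \(A>0\)) is

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}h}\left( Ah^{4}+\frac{B}{nh}\right) =4Ah^{3}-\frac{B}{nh^{2}}=0\quad \Longrightarrow \quad h_{\mathrm{AMSE}}=\left( \frac{B}{4A}\right) ^{1/5}n^{-1/5}, \end{aligned}$$

so the AMSE-optimal local bandwidth is of order \(n^{-1/5}\) and the resulting error is of order \(n^{-4/5}\). In practice, the bandwidth is selected with the bootstrap method whose performance is assessed in the simulation study of the paper.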

Lemma 4

The term \(\varPhi \left( y,t,x\right) \) in (8) has the following expression:

$$\begin{aligned} \varPhi \left( y,t,x\right) =\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|y\right) }{ 1-H(v|x)}-\int _{0}^{t}(1-H(v|y))\frac{\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}}, \end{aligned}$$

and consequently, \(\varPhi \left( x,t,x\right) =0\) for any \(t\ge 0\).

Proof of Lemma 4

Recall that \(\varPhi \left( y,t,x\right) =E\left[ \xi (T,\delta ,t,x)|X=y\right] \). Then

$$\begin{aligned} \varPhi \left( y,t,x\right)= & {} E\left[ \frac{1\{T\le t,\delta =1\}}{1-H(T|x)}\bigg |X=y\right] -E\left[ \int _{0}^{t}\frac{1\{u \le T\} \mathrm{d}H^{1}(u|x)}{ \left( 1-H(u|x)\right) ^{2}}\bigg |X=y\right] \\= & {} A^{\prime }-A^{\prime \prime }. \end{aligned}$$

We start with \(A^{\prime }\):

$$\begin{aligned} A^{\prime } =E\left[ \frac{1\{T\le t\}}{1-H(T|x)}E\left( \delta |T,X=y\right) \right] =\int \limits _{0}^{t}\frac{q(v,y)\mathrm{d}H(v|y)}{1-H(v|x)}=\int _{0}^{t}\frac{ \mathrm{d}H^{1}\left( v|y\right) }{1-H(v|x)}, \end{aligned}$$

where \(q\left( t,y\right) =E\left( \delta |T=t,X=y\right) \) and \(H^{1}\left( t|y\right) =P\left( T\le t,\delta =1|X=y\right) \).

We continue with \(A^{\prime \prime }\):

$$\begin{aligned} A^{\prime \prime }=\int _{0}^{t}E\left[ 1\{v\le T\}|X=y\right] \frac{\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}} =\int _{0}^{t}(1-H(v|y))\frac{\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}}. \end{aligned}$$

Then,

$$\begin{aligned} \varPhi \left( y,t,x\right) =\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|y\right) }{ 1-H(v|x)}-\int _{0}^{t}(1-H(v|y))\frac{\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}}, \end{aligned}$$
(39)

and therefore, \(\varPhi \left( x,t,x\right) =0\) for any \(t\ge 0\). \(\square \)
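
For the record, the cancellation at \(y=x\) written out (using only (39)):

$$\begin{aligned} \varPhi \left( x,t,x\right) =\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|x\right) }{1-H(v|x)}-\int _{0}^{t}\frac{(1-H(v|x))\,\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}}=0. \end{aligned}$$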

Lemma 5

The term \(\varPhi _1(y,t,x)\) in (9) satisfies, for any \(t \in [a,b]\),

$$\begin{aligned} \varPhi _{1}\left( x,t,x\right) =\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|x\right) }{\left( 1-H(v|x)\right) ^{2}}. \end{aligned}$$
(40)

Proof of Lemma 5

Note that \(\varPhi _{1}\left( y,t,x\right) =E\left[ \xi ^{2}(T,\delta ,t,x)|X=y\right] \), with \(\xi \) in (7). Then,

$$\begin{aligned} \varPhi _{1}\left( y,t,x\right)= & {} E\left[ \frac{1\{T\le t,\delta =1\}}{\left( 1-H(T|x)\right) ^{2}}\bigg |X=y\right] \\&+E\left[ \int _{0}^{t}\int _{0}^{t}\frac{1 \{ u\le T \} 1\{ v\le T \} }{\left( 1-H(u|x)\right) ^{2}\left( 1-H(v|x)\right) ^{2}}\mathrm{d}H^{1}(u|x)\mathrm{d}H^{1}(v|x)\bigg |X=y\right] \\&-2E\left[ \frac{1\{T\le t,\delta =1\}}{1-H(T|x)}\int _{0}^{t}\frac{1\{u\le T \} \mathrm{d}H^{1}(u|x)}{\left( 1-H(u|x)\right) ^{2}}\bigg |X=y\right] \\= & {} A+B-2C. \end{aligned}$$

The first term in the decomposition of \(\varPhi _{1}\left( y,t,x\right) \) is

$$\begin{aligned} A=\int _{0}^{t}\frac{q\left( v,y\right) }{\left( 1-H(v|x)\right) ^{2}} \mathrm{d}H(v|y)=\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|y\right) }{\left( 1-H(v|x)\right) ^{2}}. \end{aligned}$$

The second term is

$$\begin{aligned} B=\int _{0}^{t}\int _{0}^{t}\frac{1-H\left( \max \left( w,v\right) |y\right) }{\left( 1-H(v|x)\right) ^{2}\left( 1-H(w|x)\right) ^{2}} \mathrm{d}H^{1}(v|x)\mathrm{d}H^{1}(w|x). \end{aligned}$$

Integrating over the regions \(\left\{ (v,w) \in \left[ 0,t\right] \times \left[ 0,t\right] /v\le w\right\} \) and \(\left\{ (v,w) \in \left[ 0,t\right] \times \left[ 0,t\right] /w < v\right\} \), and using that the integrand is symmetric in \((v,w)\), the term \(B\) is

$$\begin{aligned} B=2\int _{0}^{t}\frac{1}{\left( 1-H(v|x)\right) ^{2}}\left( \int _{v}^{t} \frac{1-H\left( w|y\right) }{\left( 1-H(w|x)\right) ^{2}}\mathrm{d}H^{1}(w|x)\right) \mathrm{d}H^{1}(v|x). \end{aligned}$$

Finally, the third term in the decomposition of \(\varPhi _{1}\left( y,t,x\right) \) is

$$\begin{aligned} C=\int _{0}^{t}\frac{1}{\left( 1-H(u|x)\right) ^{2}}\left( \int _{u}^{t}\frac{ \mathrm{d}H^{1}\left( v|y\right) }{1-H(v|x)}\right) \mathrm{d}H^{1}(u|x). \end{aligned}$$

Note that, for \(y=x\), we have \(B=2C\), and hence \(\varPhi _{1}(x,t,x)=A\), which is precisely (40). This completes the proof. \(\square \)
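
The identity \(B=2C\) at \(y=x\) is immediate, since the factor \(1-H(w|x)\) in the numerator of \(B\) cancels one power of \(1-H(w|x)\) in the denominator:

$$\begin{aligned} B\big |_{y=x}=2\int _{0}^{t}\frac{1}{\left( 1-H(v|x)\right) ^{2}}\left( \int _{v}^{t}\frac{\mathrm{d}H^{1}(w|x)}{1-H(w|x)}\right) \mathrm{d}H^{1}(v|x)=2\,C\big |_{y=x}. \end{aligned}$$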

Lemma 6

The expression for the term \(\varPhi _{2}(x,t,x)\), for any \(t \in [a,b]\), is the following:

$$\begin{aligned} \varPhi _{2}(x,t,x)=\int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|x\right) }{\left( 1-H(v|x)\right) ^{2}}. \end{aligned}$$
(41)

Proof of Lemma 6

Recall \(\varPhi _{2}(y,t,x)=E\left[ \xi \left( T,\delta ,t,x\right) \xi (T,\delta ,\infty ,x)|X=y\right] \) with \(\xi \) in (7). Then:

$$\begin{aligned}&\varPhi _{2}(y,t,x) \\&\quad =E\left[ \frac{1\{T\le t,\delta =1\}}{\left( 1-H(T|x)\right) ^{2}}\bigg |X=y\right] \\&\qquad -E\left[ \frac{1\{\delta =1\}}{1-H(T|x)}\int _{0}^{\infty }\frac{1\{u\le T\le t\} }{\left( 1-H(u|x)\right) ^{2}}\mathrm{d}H^{1}(u|x)\bigg |X=y \right] \\&\qquad -E\left[ \frac{1\{\delta =1\}}{1-H(T|x)}\int _{0}^{t}\frac{1\{ v\le T\} }{\left( 1-H(v|x)\right) ^{2}}\mathrm{d}H^{1}(v|x)\bigg |X=y\right] \\&\qquad +E\left[ \int _{0}^{t}\frac{1\{ v\le T \} dH^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}}\int _{0}^{\infty }\frac{1\{ u\le T\} \mathrm{d}H^{1}(u|x)}{\left( 1-H(u|x)\right) ^{2}}\bigg |X=y\right] \\&\quad =A-B-C+D. \end{aligned}$$

Straightforward calculations yield:

$$\begin{aligned} A= & {} \int _{0}^{t}\frac{\mathrm{d}H^{1}\left( v|y\right) }{\left( 1-H(v|x)\right) ^{2}},\\ B= & {} \int _{0}^{\infty }\left( \int _{u}^{t}\frac{\mathrm{d}H^{1}\left( v|y\right) }{ 1-H(v|x)}\right) \frac{\mathrm{d}H^{1}(u|x)}{\left( 1-H(u|x)\right) ^{2}},\\ C= & {} \int _{0}^{t}\left( \int _{v}^{\infty }\frac{\mathrm{d}H^{1}\left( u|y\right) }{ 1-H(u|x)}\right) \frac{\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}},\\ D= & {} \int _{0}^{t}\frac{1}{\left( 1-H(v|x)\right) ^{2}}\left( \int _{0}^{\infty } \frac{1-H\left( \max \left( u,v\right) |y\right) }{\left( 1-H(u|x)\right) ^{2}}\mathrm{d}H^{1}(u|x)\right) \mathrm{d}H^{1}(v|x). \end{aligned}$$

Integrating over the regions \(\left\{ (u,v) \in \left[ 0,\infty \right) \times \left[ 0,t \right] /v\le u\right\} \) and \(\left\{ (u,v) \in \left[ 0,\infty \right) \times \left[ 0,t \right] /u< v\right\} =\left\{ (u,v) \in \left[ 0, t \right] \times \left[ 0, t \right] /u < v \right\} \), the term \(D\) is

$$\begin{aligned} D= & {} \int _{0}^{t}\left( \int _{v}^{\infty }\frac{1-H\left( u|y\right) }{ \left( 1-H(u|x)\right) ^{2}}\mathrm{d}H^{1}(u|x)\right) \frac{\mathrm{d}H^{1}(v|x)}{\left( 1-H(v|x)\right) ^{2}} \\&+\int _{0}^{\infty }\left( \int _{u}^{t}\frac{1-H\left( v|y\right) }{\left( 1-H(v|x)\right) ^{2}}\mathrm{d}H^{1}(v|x)\right) \frac{\mathrm{d}H^{1}(u|x)}{\left( 1-H(u|x)\right) ^{2}}. \end{aligned}$$

When \(y=x\), we have \(D=B+C\), and hence \(\varPhi _{2}(x,t,x)=A\), which is precisely (41). This concludes the proof. \(\square \)

Proof of Theorem 3

Under assumptions (A1)–(A10) and using Theorem 1, \(\sqrt{nh}\left( \hat{S}_{0,h}(t|x)-S_{0}(t|x)\right) \) has the same limit distribution as

$$\begin{aligned} \sqrt{nh}\sum _{i=1}^{n}\eta _{h}(T_{i},\delta _{i},X_{i},t,x)=-\left( I+II+III+IV\right) , \end{aligned}$$

where

$$\begin{aligned} I= & {} \sqrt{nh}\frac{1}{nh}\frac{S(t|x)}{p(x)m(x)} \\\times & {} \sum _{i=1}^{n}\left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},t,x)-E\left( K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},t,x)\right) \right] , \\ II= & {} \sqrt{nh}\frac{1}{nh}\frac{(1-p(x))(1-S(t|x))}{p^{2}(x)m\left( x\right) } \\\times & {} \sum _{i=1}^{n}\left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)-E\left( K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)\right) \right] , \\ III= & {} \sqrt{nh}\frac{1}{nh}\frac{S(t|x)}{p(x)m(x)}\sum _{i=1}^{n}E\left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},t,x)\right] , \\ IV= & {} \sqrt{nh}\frac{1}{nh}\frac{(1-p(x))(1-S(t|x))}{p^{2}(x)m\left( x\right) }\sum _{i=1}^{n}E\left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)\right] . \end{aligned}$$

The deterministic part \(b(t,x)\) comes from \(III+IV\). Recall the function \(\varPhi (y,t,x)\) in (39); since \(\varPhi (x,t,x)=0\), we have

$$\begin{aligned}&E\left[ K\left( \frac{x-X}{h}\right) \xi (T,\delta ,t,x)\right] \nonumber \\&\quad = \frac{1}{2} h^{3}d_{K}\left( \varPhi ^{\prime \prime }(x,t,x)m(x) + 2\varPhi ^{\prime }(x,t,x)m^{\prime }(x)\right) +o(h^{3}). \end{aligned}$$
(42)

Therefore,

$$\begin{aligned} III= & {} \sqrt{nh^{5}}\frac{S(t|x)}{p(x)m(x)}\frac{1}{2}d_{K}\left( \varPhi ^{\prime \prime }(x,t,x)m(x)+2\varPhi ^{\prime }(x,t,x)m^{\prime }(x)\right) \left( 1+o\left( 1\right) \right) ,\\ IV= & {} \sqrt{nh^{5}}\frac{(1-p(x))(1-S(t|x))}{p^{2}(x)m\left( x\right) }\frac{ 1}{2}d_{K}\nonumber \\&\quad \times \left( \varPhi ^{\prime \prime }(x,\infty ,x)m(x) + 2\varPhi ^{\prime }(x,\infty ,x)m^{\prime }(x)\right) \left( 1+o(1)\right) . \end{aligned}$$

If \(nh^{5}\rightarrow 0\), then \(III+IV=o\left( 1\right) \) and \(b\left( t,x\right) =0\). On the other hand, if \(nh^{5}\rightarrow C^{5}\) then

$$\begin{aligned} b(t,x)= & {} C^{5/2}\frac{S(t|x)}{p(x)m(x)}\frac{1}{2}d_{K}\left( \varPhi ^{\prime \prime }(x,t,x)m(x)+2\varPhi ^{\prime }(x,t,x)m^{\prime }(x)\right) \\&+\,C^{5/2}\frac{(1-p(x))(1-S(t|x))}{p^{2}(x)m\left( x\right) }\frac{1}{2} d_{K}( \varPhi ^{\prime \prime }(x,\infty ,x)m(x)+\, 2\varPhi ^{\prime }(x,\infty ,x)m^{\prime }(x)). \end{aligned}$$

As for the asymptotic distribution of \(I+II\), it is straightforward to show that:

$$\begin{aligned} I+II=\sum _{i=1}^{n}\left( \gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t)\right) , \end{aligned}$$

where

$$\begin{aligned} \gamma _{i,n}(x,t)= & {} \frac{1}{\sqrt{nh}}\frac{S(t|x)}{p(x)m(x)}\\\times & {} \left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},t ,x)-E\left( K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},t ,x)\right) \right] , \\ \varGamma _{i,n}(x,t)= & {} \frac{1}{\sqrt{nh}}\frac{(1-p(x))(1-S(t|x))}{ p^{2}(x)m\left( x\right) } \\\times & {} \left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)-E\left( K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)\right) \right] , \end{aligned}$$

are \(n\) independent variables with mean 0. To prove the asymptotic normality of \(I+II\), it suffices to show that \(\sigma _{i,n}^{2}\left( x,t\right) =\mathrm{Var}\left( \gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t)\right) <\infty \), that \(\sigma _{n}^{2}\left( x,t\right) =\sum _{i=1}^{n}\sigma _{i,n}^{2}\left( x,t\right) \) is positive and that Lindeberg’s condition is satisfied, so that Lindeberg’s theorem for triangular arrays (Theorem 7.2 in Billingsley 1968, p. 42) can be applied to obtain

$$\begin{aligned} \frac{\sum _{i=1}^{n}\left( \gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t)\right) }{ \sigma _{n}\left( x,t\right) }\rightarrow N\left( 0,1\right) , \end{aligned}$$

and consequently,

$$\begin{aligned} \frac{\sqrt{nh}\sum _{i=1}^{n}\eta _{h}(T_{i},\delta _{i},X_{i},t,x)}{\sigma _{n}\left( x,t\right) }\rightarrow N\left( 0,1\right) . \end{aligned}$$

We start by proving that the variance

$$\begin{aligned} \sigma _{i,n}^{2}\left( x,t\right) =\mathrm{Var}\left( \gamma _{i,n}(x,t)\right) +\mathrm{Var}\left( \varGamma _{i,n}(x,t)\right) +2\mathrm{Cov}\left( \gamma _{i,n}(x,t),\varGamma _{i,n}(x,t)\right) \end{aligned}$$
(43)

is finite. Note that

$$\begin{aligned} Var\left( \gamma _{i,n}(x,t)\right)= & {} \frac{1}{nh}\left( \frac{S(t|x)}{p(x)m(x)}\right) ^{2}\left\{ E\left[ K^{2}\left( \frac{x-X_{1}}{h}\right) \xi ^{2}(T_{1},\delta _{1},t,x)\right] \right. \\&\left. -E\left[ K\left( \frac{x-X_{1}}{h}\right) \xi (T_{1},\delta _{1},t,x)\right] ^{2}\right\} . \end{aligned}$$

Recall that \(\varPhi _{1}(y,t,x)=E\left[ \xi ^{2}(T,\delta ,t,x)|X=y\right] \). Proceeding as in (31) and using (42), the first term in (43) is

$$\begin{aligned} Var\left( \gamma _{i,n}(x,t)\right) =\frac{1}{n}\left( \frac{S(t|x)}{p(x)} \right) ^{2}\frac{\varPhi _{1}(x,t,x)}{m\left( x\right) }c_{K}+O\left( \frac{ h^{2}}{n}\right) . \end{aligned}$$
(44)

In a similar way, the second term in (43) is

$$\begin{aligned} Var\left( \varGamma _{i,n}(x,t)\right) =\frac{1}{n}\left( \frac{(1-p(x))(1-S(t|x))}{p^{2}(x)}\right) ^{2}\frac{ \varPhi _{1}(x,\infty ,x)}{m\left( x\right) }c_{K}+O\left( \frac{h^{2}}{n} \right) . \end{aligned}$$
(45)

Finally, for the third term in (43),

$$\begin{aligned}&Cov\left( \gamma _{i,n}(x,t),\varGamma _{i,n}(x,t)\right) \nonumber \\&\quad = \frac{1}{nh}\left\{ E\left[ K^{2}\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)\xi (T_{i},\delta _{i},t,x)\right] \right. \\&\qquad \left. -E\left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},t,x)\right] E\left[ K\left( \frac{x-X_{i}}{h}\right) \xi (T_{i},\delta _{i},\infty ,x)\right] \right\} . \end{aligned}$$

Let us consider \(\varPhi _{2}(y,t,x)=E\left[ \xi (T,\delta ,t,x)\xi (T,\delta ,\infty ,x)|X=y\right] \). Applying Taylor expansions, the third term in (43) is

$$\begin{aligned} Cov\left( \gamma _{i,n}(x,t),\varGamma _{i,n}(x,t)\right) {=}\frac{1}{n}\frac{ (1\!-\!p(x))S(t|x)(1\!-\!S(t|x))}{p^{3}(x)m(x) }\varPhi _{2}(x,t,x)c_{K}+O\left( \frac{h}{n}\right) . \end{aligned}$$
(46)

The results (44), (45) and (46), together with (40) and (41), lead to

$$\begin{aligned} \sigma _{i,n}^{2}\left( x,t\right) =\frac{c_{K}}{n}\left( V_{1}\left( t,x\right) +V_{2}\left( t,x\right) +2V_{3}\left( t,x\right) \right) +O\left( \frac{h}{n}\right) , \end{aligned}$$

where \(V_{1}\left( t,x\right) \), \(V_{2}\left( t,x\right) \) and \(V_{3}\left( t,x\right) \) are defined in (12), (13) and (14), respectively. As a consequence, \(\sigma _{i,n}^{2}\left( x,t\right) <\infty \). The finiteness of the variance \(\sigma _{n}^{2}\left( x,t\right) \) is also proved, since

$$\begin{aligned} \sigma _{n}^{2}\left( x,t\right) \!=\!\sum _{i=1}^{n}\sigma _{i,n}^{2}\left( x,t\right) \!=\!V_{1}\left( t,x\right) c_{K}+V_{2}\left( t,x\right) c_{K}\!+\!2V_{3}\left( t,x\right) c_{K}+O\left( h\right) <+\infty . \end{aligned}$$

We now check Lindeberg’s condition:

$$\begin{aligned} \frac{1}{\sigma _{n}^{2}\left( x,t\right) }\sum _{i=1}^{n}\int _{\{|\gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t)|>\epsilon \sigma _{n}\left( x,t\right) \}}(\gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t))^{2}dP\rightarrow 0,\forall \epsilon >0. \end{aligned}$$
(47)

Let us define the indicator function \(I_{i,n}( x,t)=1 \left\{ \left( \gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t)\right) ^{2}> \epsilon ^{2}\sigma _{n}^{2}\left( x,t\right) \right\} \). Then (47) can be expressed as

$$\begin{aligned} \frac{1}{\sigma _{n}^{2}\left( x,t\right) }E\left[ \sum _{i=1}^{n}(\gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t))^{2}I_{i,n}\left( x,t\right) \right] =\frac{1 }{\sigma _{n}^{2}\left( x,t\right) }E\left( \eta _{n}\left( x,t\right) \right) , \end{aligned}$$

with

$$\begin{aligned} \eta _{n}\left( x,t\right) =\sum _{i=1}^{n}(\gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t))^{2}I_{i,n}\left( x,t\right) . \end{aligned}$$

Since the functions K and \(\xi \) are bounded, \(\left| \gamma _{i,n}(x,t)+\varGamma _{i,n}(x,t)\right| \le M/\sqrt{nh}\) for some constant \(M>0\); as \(\frac{1}{nh}\rightarrow 0\) and \(\sigma _{n}^{2}(x,t)\) is bounded away from zero, one has:

$$\begin{aligned}&\exists n_{0}\in \mathbb {N}/n\ge n_{0}\Rightarrow I_{i,n}(w)=0,\forall w \text { and }\forall i\in \{1,2,\dots ,n\} \\\Leftrightarrow & {} \exists n_{0}\in \mathbb {N}/n\ge n_{0}\Rightarrow \eta _{n}(w)=0,\forall w . \end{aligned}$$

Since \(\eta _{n}(x,t)\) is bounded, the previous condition implies that \( \exists n_{0}\in \mathbb {N}/n\ge n_{0}\Rightarrow E(\eta _{n}(x,t))=0\), and then \(\lim _{n\rightarrow \infty }\frac{1}{\sigma _{n}^{2}}E(\eta _{n}(x,t))=0\). Therefore, Lindeberg’s condition holds. All these arguments together complete the proof of Theorem 3. \(\square \)

Cite this article

López-Cheda, A., Jácome, M.A. & Cao, R. Nonparametric latency estimation for mixture cure models. TEST 26, 353–376 (2017). https://doi.org/10.1007/s11749-016-0515-1
