
Beran-based approach for single-index models under censoring


Abstract

In this paper we propose a new method for estimating the parameters in a single-index model under censoring, based on the Beran estimator of the conditional distribution function. This likelihood-based method also provides a useful and simple tool for bandwidth selection. Additionally, we perform an extensive simulation study comparing this new Beran-based approach with an existing method based on Kaplan–Meier integrals. Finally, we apply both methods to a primary biliary cirrhosis data set and propose a bootstrap test for the parameters.


References

  • Beran R (1981) Nonparametric regression with randomly censored survival data. Technical Report, University of California, Berkeley

  • Bouaziz O, Lopez O (2010) Conditional density estimation in a censored single-index model. Bernoulli 16:514–542

  • Carroll RJ, Fan J, Gijbels I, Wand MP (1997) Generalized partially linear single-index models. J Am Stat Assoc 92:477–489

  • Delecroix M, Hristache M, Patilea V (2006) On semiparametric M-estimation in single-index regression. J Stat Plan Inference 136:730–769

  • Escanciano JC, Song K (2010) Testing single-index restrictions with a focus on average derivatives. J Econ 156:377–391

  • Fleming T, Harrington D (1991) Counting processes and survival analysis. Wiley, New York

  • González-Manteiga W, Cadarso-Suárez C (1994) Asymptotic properties of a generalized Kaplan–Meier estimator with some applications. J Nonparametr Stat 4:65–78

  • Härdle W, Hall P, Ichimura H (1993) Optimal smoothing in single-index models. Ann Stat 21:157–178

  • Härdle W, Mammen E, Proença I (2001) A bootstrap test for single index models. Statistics 35:427–451

  • Hristache M, Juditsky A, Spokoiny V (2001) Direct estimation of the index coefficient in a single-index model. Ann Stat 29:595–623

  • Huang ZS (2012) Corrected empirical likelihood inference for right-censored partially linear single-index model. J Multivar Anal 105:276–284

  • Huang ZS, Lin B, Feng F, Pang Z (2013) Efficient penalized estimating method in the partially varying-coefficient single-index model. J Multivar Anal 114:189–200

  • Huang ZS, Zhang R (2011) Efficient empirical–likelihood-based inferences for the single-index model. J Multivar Anal 102:937–947

  • Ichimura H (1993) Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. J Econ 58:71–120

  • Iglesias-Pérez MC, González-Manteiga W (2003) Bootstrap for the conditional distribution function with truncated and censored data. Ann Inst Stat Math 55:331–357

  • Strzalkowska-Kominiak E, Cao R (2013) Maximum likelihood estimation for conditional distribution single-index models under censoring. J Multivar Anal 114:74–98

  • Stute W, Zhu LX (2005) Nonparametric checks for single-index models. Ann Stat 33:1048–1083

  • Wang JL, Xue L, Zhu L, Chong YS (2010) Estimation for a partial-linear single-index model. Ann Stat 38:246–274

  • Xia Y, Härdle W (2006) Semi-parametric estimation of partially linear single-index models. J Multivar Anal 97:1162–1184

  • Zhang R, Huang Z, Lv Y (2010) Statistical inference for the index parameter in single-index models. J Multivar Anal 101:1026–1041

  • Zhu LX, Xue LG (2006) Empirical likelihood confidence regions in a partially linear single-index model. J R Stat Soc Ser B 68:549–570

Acknowledgments

The authors acknowledge financial support from Ministerio de Economía y Competitividad Grant MTM2011-22392 (EU ERDF support included). Additionally, Ewa Strzalkowska-Kominiak acknowledges financial support from a Juan de la Cierva scholarship and ECO2011-25706 from the Spanish Ministerio de Economía y Competitividad.

Author information

Corresponding author

Correspondence to Ewa Strzalkowska-Kominiak.

Appendix: Proof of Theorem 1

Lemma 1

Let \(\tilde{F}_{n\theta }(y|\theta '\mathbf {x})\) be the Beran estimator. Under B1–B5, we have

  a)

    \( \nabla _{\theta }\log (1-\tilde{F}_{n\theta }(y|\theta '\mathbf {x}))\mathop {\rightarrow }\limits ^{n\rightarrow \infty } \nabla _{\theta }\log (1- F_{\theta }(y|\theta '\mathbf {x}))\) in probability and consequently

  b)

    \(\nabla _{\theta }\tilde{F}_{n\theta }(y|\theta '\mathbf {x})\mathop {\rightarrow }\limits ^{n\rightarrow \infty } \nabla _{\theta }F_{\theta }(y|\theta '\mathbf {x})\) in probability.

Proof

Recall that

$$\begin{aligned} \tilde{F}_{n\theta }(y|\theta '\mathbf {x})=1-\prod \limits _{i=1}^n\left[ 1-\frac{B_{in}(\theta '\mathbf {x})1_{\{Y_i\le y\}}\delta _i}{\sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } B_{jn}(\theta '\mathbf {x})}\right] , \end{aligned}$$

where

$$\begin{aligned} B_{in}(\theta '\mathbf {x})=\frac{K\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) }{\sum _{j=1}^n K\left( \frac{\theta '\mathbf {x}-\theta 'X_j}{h_1}\right) }. \end{aligned}$$
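For concreteness, here is a minimal numerical sketch of this estimator. It is not the authors' code: a Gaussian kernel is assumed purely for illustration, and all function and variable names are ours.

```python
import numpy as np

def beran_cdf(y, x, theta, X, Y, delta, h1):
    """Minimal sketch of the Beran estimator F_{n,theta}(y | theta'x).

    X: (n, d) covariates, Y: (n,) observed times, delta: (n,) censoring
    indicators (1 = uncensored), h1: bandwidth.  A Gaussian kernel is
    assumed here only for illustration.
    """
    u = X @ theta                               # projections theta'X_i
    t = np.dot(theta, x)                        # projection of the evaluation point
    K = np.exp(-0.5 * ((t - u) / h1) ** 2)      # kernel weights (unnormalised)
    B = K / K.sum()                             # Nadaraya-Watson weights B_in(theta'x)

    surv = 1.0
    for i in np.argsort(Y):                     # product over uncensored Y_i <= y
        if Y[i] > y:
            break
        if delta[i] == 1:
            at_risk = B[Y >= Y[i]].sum()        # sum_j 1{Y_j >= Y_i} B_jn(theta'x)
            surv *= 1.0 - B[i] / at_risk
    return 1.0 - surv
```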

Then

$$\begin{aligned}&{\log (1-\tilde{F}_{n\theta }(y|\theta '\mathbf {x}))= \sum _{i=1}^n \log \left( \frac{\sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x})}{\sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } B_{jn}(\theta '\mathbf {x})}\right) ^{1_{\{Y_i\le y\}}\delta _i}}\\&\quad = \sum _{i=1}^n 1_{\{Y_i\le y\}}\delta _i\left( \log \left( \sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x})\right) -\log \left( \sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } B_{jn}(\theta '\mathbf {x})\right) \right) \end{aligned}$$

Hence

$$\begin{aligned}&{\nabla _{\theta }\log (1-\tilde{F}_{n\theta }(y|\theta '\mathbf {x})) }\\&\quad = \sum _{i=1}^n 1_{\{Y_i\le y\}}\delta _i\left( \frac{\sum _{j=1}^n 1_{\{Y_j> Y_i\} } \nabla _{\theta }B_{jn}(\theta '\mathbf {x})}{\sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x})}-\frac{\sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } \nabla _{\theta }B_{jn}(\theta '\mathbf {x})}{\sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } B_{jn}(\theta '\mathbf {x})}\right) \\&\quad = \sum _{i=1}^n 1_{\{Y_i\le y\}}\delta _i \frac{B_{in}(\theta '\mathbf {x})\sum _{j=1}^n 1_{\{Y_j> Y_i\} } \nabla _{\theta }B_{jn}(\theta '\mathbf {x})-\nabla _{\theta }B_{in}(\theta '\mathbf {x} )\sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x} )}{\biggr (\sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x} )\biggr )\biggr (\sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } B_{jn}(\theta '\mathbf {x} )\biggr )}. \end{aligned}$$
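The last equality follows, when there are no ties among the \(Y_i\), from the identity \(\sum _{j=1}^n 1_{\{Y_j\ge Y_i\} } B_{jn}(\theta '\mathbf {x})-\sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x})=B_{in}(\theta '\mathbf {x})\): writing \(A_i=\sum _{j=1}^n 1_{\{Y_j> Y_i\} } B_{jn}(\theta '\mathbf {x})\) and \(D_i=A_i+B_{in}(\theta '\mathbf {x})\),

$$\begin{aligned} \frac{\nabla _{\theta }A_i}{A_i}-\frac{\nabla _{\theta }D_i}{D_i}=\frac{D_i\nabla _{\theta }A_i-A_i\nabla _{\theta }D_i}{A_iD_i}=\frac{B_{in}(\theta '\mathbf {x})\nabla _{\theta }A_i-\nabla _{\theta }B_{in}(\theta '\mathbf {x})A_i}{A_iD_i}. \end{aligned}$$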

Moreover,

$$\begin{aligned} \nabla _{\theta }B_{in}(\theta '\mathbf {x} )&= \frac{\frac{\mathbf {x}-X_i}{nh_1^2} K'\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) }{\frac{1}{nh_1}\sum _{j=1}^n K\left( \frac{\theta '\mathbf {x} -\theta 'X_j}{h_1}\right) }\\&\quad - \frac{1}{nh_1}K\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) \frac{\frac{1}{nh_1^2}\sum _{j=1}^n (\mathbf {x}-X_j)K'\left( \frac{\theta '\mathbf {x}-\theta 'X_j}{h_1}\right) }{\left( \frac{1}{nh_1}\sum _{j=1}^n K\left( \frac{\theta '\mathbf {x} -\theta 'X_j}{h_1}\right) \right) ^2}. \end{aligned}$$

Let

$$\begin{aligned} \tilde{H}(y|\theta '\mathbf {x} )=\mathbb {P}(Z\le y, \delta =1|\theta 'X=\theta '\mathbf {x} ) \end{aligned}$$

and

$$\begin{aligned} H(y|\theta '\mathbf {x} )=\mathbb {P}(Z\le y|\theta 'X=\theta '\mathbf {x} ). \end{aligned}$$

Hence if \(H\) is continuous, it is easy to show that

$$\begin{aligned} \nabla _{\theta }\log (1-\tilde{F}_{n\theta }(y|\theta '\mathbf {x} ))\rightarrow -\nabla _{\theta }\int \limits _0^y \frac{\tilde{H}(dz|\theta '\mathbf {x} )}{1- H(z|\theta '\mathbf {x} )} \end{aligned}$$

Finally, using (vi) in González-Manteiga and Cadarso-Suárez (1994), it is obvious that

$$\begin{aligned} \nabla _{\theta }\log (1-F_{\theta }(y|\theta '\mathbf {x}))=-\nabla _{\theta }\int \limits _0^y \frac{\tilde{H}(dz|\theta '\mathbf {x} )}{1- H(z|\theta '\mathbf {x} )}. \end{aligned}$$

As to b), since \(\tilde{F}_{n\theta }(y|\theta '\mathbf {x} )\rightarrow F_{\theta }(y|\theta '\mathbf {x} )\) and \( \nabla _{\theta }\tilde{F}_{n\theta }(y|\theta '\mathbf {x} )=-\nabla _{\theta }\log (1-\tilde{F}_{n\theta }(y|\theta '\mathbf {x} ))(1-\tilde{F}_{n\theta }(y|\theta '\mathbf {x} ))\), the proof is completed.

Lemma 2

Let \(\tilde{f}_{\theta }(y|\theta '\mathbf {x} )\) be the estimated density and \(\tilde{F}_{\theta }^S(y|\theta '\mathbf {x} )\) the smoothed distribution function estimator defined in (2) and (3). Then, under B1–B5 and when \(n\rightarrow \infty \),

  a)

    \( \nabla _{\theta }\tilde{f}_{\theta }(y|\theta '\mathbf {x} )_{\theta =\theta _0}\rightarrow \nabla _{\theta }f_{\theta }(y|\theta '\mathbf {x} )_{\theta =\theta _0}\).

  b)

    \( \nabla _{\theta }(1-\tilde{F}_{\theta }^S(y|\theta '\mathbf {x} ))_{\theta =\theta _0}\rightarrow \nabla _{\theta }(1-F_{\theta }(y|\theta '\mathbf {x} ))_{\theta =\theta _0}\).

Proof

Recall,

$$\begin{aligned} \tilde{f}_{\theta }(y|\theta '\mathbf {x} )=\frac{1}{h_2}\sum _{i=1}^n W_{in}(\theta '\mathbf {x} ) K\left( \frac{y-Y_i}{h_2}\right) . \end{aligned}$$

Since \(\sum _{i=1}^n B_{in}(\theta '\mathbf {x} )=1\), we have

$$\begin{aligned} W_{in}(\theta '\mathbf {x} )=\frac{\delta _i B_{in}(\theta '\mathbf {x})}{1-G_{n\theta }(Y_i-|\theta '\mathbf {x} )}, \end{aligned}$$

where

$$\begin{aligned} 1-G_{n\theta }(y|\theta '\mathbf {x})=\prod \limits _{j=1}^n\left[ 1-\frac{B_{jn}(\theta '\mathbf {x} )1_{\{Y_j\le y\}}(1-\delta _j)}{\sum _{k=1}^n 1_{\{Y_k\ge Y_j\} } B_{kn}(\theta '\mathbf {x} )}\right] . \end{aligned}$$
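To illustrate how these pieces fit together, a short sketch in the same spirit as the earlier one (again assuming a Gaussian kernel; all names are illustrative and not taken from the paper):

```python
import numpy as np

def censoring_surv_left(y, x, theta, X, Y, delta, h1):
    """Sketch of 1 - G_{n,theta}(y- | theta'x): the Beran product-limit for the
    censoring distribution (roles of delta and 1 - delta swapped), taken as a
    left limit, i.e. only censored observations with Y_j < y contribute."""
    u = X @ theta
    t = np.dot(theta, x)
    K = np.exp(-0.5 * ((t - u) / h1) ** 2)
    B = K / K.sum()                              # B_jn(theta'x)
    surv = 1.0
    for j in np.argsort(Y):
        if Y[j] >= y:                            # strict inequality: G at y-
            break
        if delta[j] == 0:                        # censored observations drive G
            at_risk = B[Y >= Y[j]].sum()
            surv *= 1.0 - B[j] / at_risk
    return surv

def density_estimate(y, x, theta, X, Y, delta, h1, h2):
    """Sketch of f_theta(y | theta'x) built from the jump weights
    W_in(theta'x) = delta_i * B_in(theta'x) / (1 - G_{n,theta}(Y_i- | theta'x))."""
    u = X @ theta
    t = np.dot(theta, x)
    K1 = np.exp(-0.5 * ((t - u) / h1) ** 2)
    B = K1 / K1.sum()                            # B_in(theta'x)
    G_surv = np.array([censoring_surv_left(Y[i], x, theta, X, Y, delta, h1)
                       for i in range(len(Y))])
    W = delta * B / G_surv                       # W_in(theta'x)
    K2 = np.exp(-0.5 * ((y - Y) / h2) ** 2) / np.sqrt(2.0 * np.pi)
    return np.sum(W * K2) / h2
```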

Hence

$$\begin{aligned} \nabla _{\theta }W_{in}(\theta '\mathbf {x} )&= \frac{\delta _i \nabla _{\theta }B_{in}(\theta '\mathbf {x})}{1-G_{n\theta }(Y_i-|\theta '\mathbf {x} )}\\&\quad -\frac{\delta _i B_{in}(\theta '\mathbf {x} )}{1-G_{n\theta }(Y_i-|\theta '\mathbf {x})}\nabla _{\theta }\log (1-G_{n\theta }(Y_i-|\theta '\mathbf {x} )). \end{aligned}$$

Furthermore,

$$\begin{aligned} \nabla _{\theta }\tilde{f}_{\theta }(y|\theta '\mathbf {x})=\frac{1}{h_2}\sum _{i=1}^n \nabla _{\theta } W_{in}(\theta '\mathbf {x}) K\left( \frac{y-Y_i}{h_2}\right) =A_n(y,\theta '\mathbf {x})+B_n(y,\theta '\mathbf {x} ), \end{aligned}$$

where

$$\begin{aligned} A_n(y,\theta '\mathbf {x} )=\frac{1}{h_2}\sum _{i=1}^n \frac{\delta _i \nabla _{\theta }B_{in}(\theta '\mathbf {x})}{1-G_{n\theta }(Y_i-|\theta '\mathbf {x} )} K\left( \frac{y-Y_i}{h_2}\right) \end{aligned}$$

and

$$\begin{aligned} B_n(y,\theta '\mathbf {x} )=-\frac{1}{h_2}\sum _{i=1}^n W_{in}(\theta '\mathbf {x} ) \nabla _{\theta }\log (1-G_{n\theta }(Y_i-|\theta '\mathbf {x}))K\left( \frac{y-Y_i}{h_2}\right) . \end{aligned}$$

As in the proof of Lemma 1, we have

$$\begin{aligned} \nabla _{\theta }B_{in}(\theta '\mathbf {x} )&= \frac{\frac{\mathbf {x}-X_i}{nh_1^2} K'\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) }{\frac{1}{nh_1}\sum _{j=1}^n K\left( \frac{\theta '\mathbf {x} -\theta 'X_j}{h_1}\right) } \\&\quad - \frac{1}{nh_1}K\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) \frac{\frac{1}{nh_1^2}\sum _{j=1}^n (\mathbf {x}-X_j)K'\left( \frac{\theta '\mathbf {x}-\theta 'X_j}{h_1}\right) }{\left( \frac{1}{nh_1}\sum _{j=1}^n K\left( \frac{\theta '\mathbf {x} -\theta 'X_j}{h_1}\right) \right) ^2}. \end{aligned}$$

Hence, it is easy to show that

$$\begin{aligned} A_n(y,\theta '\mathbf {x} )=A_{1n}(y,\theta '\mathbf {x})-A_{2n}(y,\theta '\mathbf {x} )+o_{\mathbb {P}}(1), \end{aligned}$$

where

$$\begin{aligned}&A_{1n}(y,\theta '\mathbf {x} )\\&\quad =\frac{1}{f_{\theta 'X}(\theta '\mathbf {x})}\frac{1}{nh_1^2 h_2}\sum _{i=1}^n \frac{\delta _i }{1-G_{\theta }(Y_i-|\theta '\mathbf {x} )} (\mathbf {x}-X_i)K'\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) K\left( \frac{y-Y_i}{h_2}\right) \end{aligned}$$

and

$$\begin{aligned}&A_{2n}(y,\theta '\mathbf {x})\\&\quad =\frac{\nabla _{\theta }f_{\theta 'X}(\theta '\mathbf {x})}{f_{\theta 'X}^2(\theta '\mathbf {x} )}\frac{1}{nh_1 h_2}\sum _{i=1}^n \frac{\delta _i }{1-G_{\theta }(Y_i-|\theta '\mathbf {x})}K\left( \frac{\theta '\mathbf {x}-\theta 'X_i}{h_1}\right) K\left( \frac{y-Y_i}{h_2}\right) . \end{aligned}$$

As to \(A_{1n}(y,\theta '\mathbf {x} )\), it is a sum of iid random variables, whose variance goes to zero when \(n\rightarrow \infty \) and \(n h_1^3 h_2\rightarrow \infty \). Moreover, the expectation of \(A_{1n}(y,\theta '\mathbf {x} )\) equals

$$\begin{aligned}&E(A_{1n}(y,\theta '\mathbf {x} ))\\&\quad =\frac{1}{f_{\theta 'X}(\theta '\mathbf {x})}\frac{1}{h_1^2 h_2}E\left( \frac{1_{\{Z\le C\}} (\mathbf {x}-X)}{1-G_{\theta }(Z-|\theta '\mathbf {x} )}K'\left( \frac{\theta '\mathbf {x}-\theta 'X}{h_1}\right) K\left( \frac{y-Z}{h_2}\right) \right) . \end{aligned}$$

Setting \(\theta =\theta _0\), under B2 and B3 and using a Taylor expansion, we obtain

$$\begin{aligned} E(A_{1n}(y,\theta _0'\mathbf {x} ))&= \frac{1}{f_{\theta _0'X}(\theta _0'\mathbf {x} )}\frac{1}{h_1^2 h_2}\int \frac{1-G_{\theta _0}(z-|u)}{1-G_{\theta _0}(z-|\theta _0'\mathbf {x} )} E(\mathbf {x} -X|\theta _0' X=u)\\&\quad \times K'\left( \frac{\theta _0'\mathbf {x} -u}{h_1}\right) K\left( \frac{y-z}{h_2}\right) f_{\theta _0}(z,u)\,dz\,du\\&= \frac{\frac{d}{dt}\left\{ (\mathbf {x} -E(X|\theta _0' X=t))(1-G_{\theta _0}(y-|t))f_{\theta _0}(y,t)\right\} _{t=\theta _0'\mathbf {x} }}{f_{\theta _0'X}(\theta _0'\mathbf {x} )(1-G_{\theta _0}(y-|\theta _0'\mathbf {x} ))}+o(1)\\&= \frac{1}{f_{\theta _0'X}(\theta _0'\mathbf {x} )}\frac{d}{dt}\left\{ (\mathbf {x} -E(X|\theta _0' X=t))f_{\theta _0}(y,t)\right\} _{t=\theta _0'\mathbf {x} }\\&\quad +\frac{\frac{d}{dt}(1-G_{\theta _0}(y-|t))_{t=\theta _0'\mathbf {x} }}{1-G_{\theta _0}(y-|\theta _0'\mathbf {x} )}\left( \mathbf {x} -E(X|\theta _0' X=\theta _0'\mathbf {x} )\right) f_{\theta _0}(y|\theta _0'\mathbf {x} )+o(1). \end{aligned}$$
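The passage from the integral to the derivative term rests on a standard kernel-derivative identity; as a worked step added here for clarity, assume \(K\) is a symmetric density vanishing at the endpoints of its support, so that \(\int K'(v)\,dv=0\) and \(\int vK'(v)\,dv=-1\), and that \(g\) below is differentiable. Then

$$\begin{aligned} \frac{1}{h_1^2}\int g(u)K'\left( \frac{t-u}{h_1}\right) du=\frac{1}{h_1}\int g(t-h_1v)K'(v)\,dv=g'(t)+O(h_1), \end{aligned}$$

applied in the \(u\)-integral at \(t=\theta _0'\mathbf {x}\) with \(g(u)=(\mathbf {x} -E(X|\theta _0' X=u))(1-G_{\theta _0}(y-|u))f_{\theta _0}(y,u)\), while the factor \(h_2^{-1}K((y-z)/h_2)\) concentrates the \(z\)-integral at \(z=y\).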

Furthermore, using Lemmas 4 and 5 of Strzalkowska-Kominiak and Cao (2013), we have

$$\begin{aligned} \frac{d}{dt}\left\{ (\mathbf {x} -E(X|\theta _0' X=t))f_{\theta _0}(y,t)\right\} _{t=\theta _0'\mathbf {x}}=\nabla _{\theta }f_{\theta }(y,\theta '\mathbf {x} )_{\theta =\theta _0} \end{aligned}$$

and

$$\begin{aligned} \frac{\frac{d}{dt}(1-G_{\theta _0}(y-|t))_{t=\theta _0'\mathbf {x}}}{1-G_{\theta _0}(y-|\theta _0'\mathbf {x} )}(\mathbf {x} -E(X|\theta _0' X=\theta _0'\mathbf {x}))=\nabla _{\theta }\log (1-G_{\theta }(y-|\theta '\mathbf {x}))_{\theta =\theta _0}. \end{aligned}$$

Recall that the last equation holds even if the conditional distribution function of \(C\) given \(\theta _0'X\), \(G_{\theta _0}(y|\theta _0'\mathbf {x} )\), does not follow the single-index model assumption.

Finally, we obtain

$$\begin{aligned} A_{1n}(y,\theta _0'\mathbf {x} )\rightarrow \frac{\nabla _{\theta }f_{\theta }(y,\theta '\mathbf {x})_{\theta =\theta _0}}{f_{\theta _0'X}(\theta _0'\mathbf {x})}+\nabla _{\theta }\log (1-G_{\theta }(y-|\theta '\mathbf {x}))_{\theta =\theta _0}f_{\theta _0}(y|\theta _0'\mathbf {x} ). \end{aligned}$$

Similarly, we may show that

$$\begin{aligned} A_{2n}(y,\theta _0'\mathbf {x} )\rightarrow \frac{\nabla _{\theta }f_{\theta 'X}(\theta '\mathbf {x})_{\theta =\theta _0}}{f_{\theta _0'X}^2(\theta _0'\mathbf {x} )}f_{\theta _0}(y,\theta _0'\mathbf {x}). \end{aligned}$$

Hence

$$\begin{aligned} A_n(y,\theta _0'\mathbf {x} )\rightarrow \nabla _{\theta }f_{\theta }(y|\theta '\mathbf {x})_{\theta =\theta _0}+\nabla _{\theta }\log (1-G_{\theta }(y-|\theta '\mathbf {x}))_{\theta =\theta _0}f_{\theta _0}(y|\theta _0'\mathbf {x} ). \end{aligned}$$

As to \(B_n(y,\theta '\mathbf {x} )\), by a Taylor expansion and using Lemma 1 for the Beran estimator \(G_{n\theta }\), we obtain

$$\begin{aligned} B_n(y,\theta _0'\mathbf x)\rightarrow -\nabla _{\theta }\log (1-G_{\theta }(y-|\theta '\mathbf {x}))_{\theta =\theta _0} f_{\theta _0}(y|\theta _0'\mathbf {x} ). \end{aligned}$$

This completes the proof of a). Finally, since

$$\begin{aligned} \tilde{F}_{\theta }^S(y|\theta '\mathbf {x} )=\sum _{i=1}^n W_{in}(\theta '\mathbf {x} ) {\mathbb {K}}\left( \frac{y-Y_i}{h_2}\right) \end{aligned}$$

the proof of b) is similar.

Lemma 3

Let

$$\begin{aligned} l_n(\theta )=\frac{1}{n}\sum _{i=1}^n \biggr (\delta _i \log f_{\theta }(Y_i|\theta 'X_i)+(1-\delta _i)\log (1- F_{\theta }(Y_i|\theta 'X_i))\biggr ), \end{aligned}$$

be the theoretical log-likelihood function and \(\theta _0\) the true parameter. Then

$$\begin{aligned} \nabla _{\theta }\tilde{l}_n(\theta )_{\theta =\theta _0}- \nabla _{\theta } l_n(\theta )_{\theta =\theta _0}\rightarrow 0, \end{aligned}$$

when \(n\) goes to infinity.

Proof

We have

$$\begin{aligned} \nabla _{\theta }\tilde{l}_n(\theta )=\frac{1}{n}\sum _{i=1}^n \biggr (\delta _i \frac{\nabla _{\theta }\tilde{f}_{\theta }^{-i}(Y_i|\theta 'X_i)}{\tilde{f}_{\theta }^{-i}(Y_i|\theta 'X_i)}+(1-\delta _i)\frac{\nabla _{\theta }(1-\tilde{F}_{\theta }^{S,-i}(Y_i|\theta 'X_i))}{1-\tilde{F}_{\theta }^{S,-i}(Y_i|\theta 'X_i)}\biggr ). \end{aligned}$$

Lemma 2 completes the proof.

Similarly, we can show the following result.

Lemma 4

Under conditions B1–B5,

$$\begin{aligned} \tilde{l}_n^{[2]}(\theta )\rightarrow l^{[2]}(\theta ), \end{aligned}$$

where \(\tilde{l}_n^{[2]}(\theta )\) denotes the Hessian matrix of \(\tilde{l}_n(\theta )\) and \(l^{[2]}(\theta )=E(l_n^{[2]}(\theta ))\) denotes the expectation of the Hessian matrix of \(l_n(\theta )\).

Proof of Theorem 1

Using Theorem 1 in Strzalkowska-Kominiak and Cao (2013), it is easy to prove that, under B1,

$$\begin{aligned} \theta _0=\arg \max _{\theta }E(l_n(\theta )). \end{aligned}$$

Hence using a Taylor expansion, we have

$$\begin{aligned} E(\nabla _{\theta } l_n(\theta )_{\theta =\theta _0})=0=\nabla _{\theta } \tilde{l}_n(\theta )_{\theta =\tilde{\theta }_n}=\nabla _{\theta } \tilde{l}_n(\theta )_{\theta =\theta _0}+\tilde{l}_n^{[2]}(\tilde{\theta }_n^*)(\tilde{\theta }_n-\theta _0), \end{aligned}$$

where \(\tilde{\theta }_n^*\) is between \(\tilde{\theta }_n\) and \(\theta _0\). Furthermore,

$$\begin{aligned} \tilde{\theta }_n-\theta _0=-\left[ \tilde{l}_n^{[2]}(\tilde{\theta }_n^*)\right] ^{-1}(\nabla _{\theta } \tilde{l}_n(\theta )_{\theta =\theta _0}-E(\nabla _{\theta } l_n(\theta )_{\theta =\theta _0})). \end{aligned}$$

Finally, using Lemmas 3 and 4 and since \(\nabla _{\theta } l_n(\theta )_{\theta =\theta _0}\rightarrow E(\nabla _{\theta } l_n(\theta )_{\theta =\theta _0})\) in probability, the proof is completed.
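As a practical complement to the proof, here is a minimal numerical sketch of how the estimator \(\tilde{\theta }_n\) can be computed by maximizing the cross-validated (leave-one-out) log-likelihood over the unit sphere. It reuses the hypothetical helpers from the earlier sketches, substitutes the unsmoothed Beran estimator for \(\tilde{F}_{\theta }^{S}\) for brevity, and the optimizer choice and all names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def loglik(theta, X, Y, delta, h1, h2, eps=1e-10):
    """Sketch of the censored single-index log-likelihood l_n(theta), with the
    conditional density and distribution replaced by leave-one-out kernel
    estimates (beran_cdf and density_estimate from the sketches above)."""
    n = len(Y)
    total = 0.0
    for i in range(n):
        keep = np.arange(n) != i                # leave observation i out
        Xi, Yi, di = X[keep], Y[keep], delta[keep]
        if delta[i] == 1:
            f_i = density_estimate(Y[i], X[i], theta, Xi, Yi, di, h1, h2)
            total += np.log(max(f_i, eps))
        else:
            F_i = beran_cdf(Y[i], X[i], theta, Xi, Yi, di, h1)
            total += np.log(max(1.0 - F_i, eps))
    return total / n

def estimate_theta(X, Y, delta, h1, h2):
    """Maximize the log-likelihood over the unit sphere via spherical angles."""
    d = X.shape[1]

    def to_theta(phi):                          # map d-1 angles to a unit vector
        theta = np.ones(d)
        for k, a in enumerate(phi):
            theta[k] *= np.cos(a)
            theta[k + 1:] *= np.sin(a)
        return theta

    res = minimize(lambda phi: -loglik(to_theta(phi), X, Y, delta, h1, h2),
                   x0=np.zeros(d - 1), method="Nelder-Mead")
    return to_theta(res.x)
```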

Cite this article

Strzalkowska-Kominiak, E., Cao, R. Beran-based approach for single-index models under censoring. Comput Stat 29, 1243–1261 (2014). https://doi.org/10.1007/s00180-014-0489-y
