
M-estimation of the regression function under random left truncation and functional time series model


Abstract

In this paper we study the M-estimation of the nonparametric regression function, with a functional covariate, when the response variable is subject to left truncation by another random variable. Under standard assumptions, we obtain the almost complete convergence rate of this robust estimator when the sample is an \(\alpha \)-mixing sequence. This approach can be applied in time series analysis to the prediction problem. Our asymptotic results are illustrated by a simulation study.


References

  • Andersen PK, Borgan O, Gill RD, Keiding N (1993) Statistical models based on counting processes. Springer, New York

  • Attouch M, Laksaci A, Ould Saïd E (2010) Asymptotic normality of a robust estimator of the regression function for functional time series. J Korean Stat Soc 39:489–500

  • Attouch M, Laksaci A, Ould Saïd E (2012) Robust regression for functional time series data. J Jpn Stat Soc 42:125–143

  • Azzedine N, Laksaci A, Ould Saïd E (2008) On the robust nonparametric regression estimation for functional regressor. Stat Probab Lett 78:3216–3221

  • Boente G, Fraiman R (1989) Nonparametric regression estimation. J Multivar Anal 29:180–198

  • Boente G, Fraiman R (1990) Asymptotic distribution of robust estimators for nonparametric models from mixing processes. Ann Stat 18:891–906

  • Boente G, Gonzalez-Manteiga W, Pérez-Gonzalez A (2009) Robust nonparametric estimation with missing data. J Stat Plann Inference 139:571–592

  • Bogachev VI (1999) Gaussian measures. Mathematical surveys and monographs, vol 62. American Mathematical Society, Providence

  • Bollerslev T (1986) Generalized autoregressive conditional heteroskedasticity. J Econom 31:307–327

  • Bongiorno EG, Salinelli E, Goia A, Vieu P (eds) (2014) Contributions in infinite-dimensional statistics and related topics. Società editrice Esculapio, Bologna

  • Bradley RC (2007) Introduction to strong mixing conditions, vol I–III. Kendrick Press, Utah

  • Chen J, Zhang L (2009) Asymptotic properties of nonparametric M-estimation for mixing functional data. J Stat Plann Inference 139:533–546

  • Collomb G, Härdle W (1986) Strong uniform convergence rates in robust nonparametric time series analysis and prediction: kernel regression estimation from dependent observations. Stoch Process Appl 23:77–89

  • Crambes C, Delsol L, Laksaci A (2008) Robust nonparametric estimation for functional data. J Nonparametr Stat 20:573–598

  • Dedecker J, Doukhan P, Lang G, León JR, Louhichi S, Prieur C (2007) Weak dependence: with examples and applications. Lecture notes in statistics, vol 190. Springer, New York

  • Derrar S, Laksaci A, Ould Saïd E (2015) On the nonparametric estimation of the functional \(\psi \)-regression for a random left-truncation model. J Stat Theory Pract 9:823–849

  • Engle RF (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica 50:987–1007

  • Fan J, Hu TC, Truong YK (1994) Robust nonparametric function estimation. Scand J Stat 21:433–446

  • Ferraty F, Vieu P (2006) Nonparametric functional data analysis: theory and practice. Springer, New York

  • Ferraty F, Laksaci A, Vieu P (2006) Estimating some characteristics of the conditional distribution in nonparametric functional models. Stat Inference Stoch Process 9:47–76

  • Gheriballah A, Laksaci A, Sekkal S (2013) Nonparametric \(M\)-regression for functional ergodic data. Stat Probab Lett 83:902–908

  • He S, Yang G (1994) Estimating a lifetime distribution under different sampling plans. In: Gupta SS, Berger JO (eds) Statistical decision theory and related topics. Springer, Berlin, pp 73–85

  • He S, Yang G (1998) Estimation of the truncation probability in the random truncation model. Ann Stat 26:1011–1027

  • Helal N, Ould Saïd E (2016) Kernel conditional quantile estimator under left truncation for functional regressors. Opuscula Math 36(1):25–48. http://dx.doi.org/10.7494/OpMath.2016.36.1.25

  • Horváth L, Kokoszka P (2012) Inference for functional data with applications. Springer series in statistics, vol 200. Springer, New York

  • Huber PJ (1964) Robust estimation of a location parameter. Ann Math Stat 35:73–101

  • Laïb N, Ould Saïd E (2000) A robust nonparametric estimation of the autoregression function under an ergodic hypothesis. Can J Stat 28:817–828

  • Li WV, Shao QM (2001) Gaussian processes: inequalities, small ball probabilities and applications. In: Rao CR, Shanbhag D (eds) Stochastic processes: theory and methods. Handbook of statistics, vol 19. North-Holland, Amsterdam

  • Lynden-Bell D (1971) A method of allowing for known observational selection in small samples applied to 3CR quasars. Mon Not R Astron Soc 155:95–118

  • Masry E (1986) Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans Inf Theory 32:254–267

  • Ould Saïd E, Lemdani M (2006) Asymptotic properties of a nonparametric regression function estimator with randomly truncated data. Ann Inst Stat Math 58:357–378

  • Ould Saïd E, Tatachak A (2009) Strong consistency rate for the kernel mode estimator under strong mixing hypothesis and left truncation. Commun Stat Theory Methods 38:1154–1169

  • Ozaki T (1979) Nonlinear time series models for nonlinear random vibrations. Technical report, University of Manchester, Manchester

  • Ramsay JO, Silverman BW (2005) Functional data analysis, 2nd edn. Springer, New York

  • Rio E (2000) Théorie asymptotique des processus aléatoires faiblement dépendants. Mathématiques & applications, vol 31. Springer, Berlin

  • Stute W (1993) Almost sure representations of the product-limit estimator for truncated data. Ann Stat 21:146–156

  • Wang JF, Liang HY (2012) Asymptotic properties for an M-estimator of the regression function with truncation and dependent data. J Korean Stat Soc 41:351–367

  • Wang JF, Liang HY, Fan GL (2012) Local M-estimation of nonparametric regression with left-truncated and dependent data. Sci Sin Math 42:995–1015

  • Woodroofe M (1985) Estimating a distribution function with truncated data. Ann Stat 13:163–177


Acknowledgements

The authors are grateful to the two anonymous reviewers for their particularly careful reading, relevant remarks, and constructive comments, which helped improve the quality and presentation of an earlier version of this paper. The second author would like to express their gratitude to King Khalid University, Saudi Arabia, for providing administrative and technical support.

Author information

Correspondence to Elias Ould Saïd.

Appendix

For the proofs of Theorems 1 and 2 we use the fact that \(\rho \) is strictly convex and continuously differentiable with respect to its second argument, so that \(\psi \) is strictly monotone and continuous with respect to its second argument. We give the proof for the case of an increasing \(\psi (Y,\cdot )\); the decreasing case is obtained by considering \(-\psi (Y,\cdot )\). It follows that the optimization problems (1.3) and (2.1) amount to finding the zeros of the functions \(\Psi (\cdot ,\cdot )\) and \({\widehat{\Psi }}(\cdot ,\cdot )\), respectively. Under this consideration we can therefore write, for all \(\epsilon >0\),

$$\begin{aligned} \Psi (\chi ,\theta _\chi -\epsilon )\le & {} \Psi (\chi ,\theta _\chi )=0\le \Psi (\chi ,\theta _\chi +\epsilon )\, \text{ and } \, {\widehat{\Psi }}(\chi ,\widehat{ \theta _\chi } -\epsilon )\le {\widehat{\Psi }}(\chi ,\widehat{ \theta _\chi })\\= & {} 0\le {\widehat{\Psi }}(\chi ,\widehat{ \theta _\chi } +\epsilon ). \end{aligned}$$

Hence, for all \(\epsilon >0\), we have

$$\begin{aligned} \mathbf{P}\left( |\widehat{ \theta _\chi }-\theta _\chi |\ge \epsilon \right)\le & {} \mathbf{P}\left( |{\widehat{\Psi }}(\chi ,\theta _\chi +\epsilon )-\Psi (\chi ,\theta _\chi +\epsilon )| \ge \Psi (\chi ,\theta _\chi +\epsilon )\right) \\&+\,\mathbf{P}\left( |{\widehat{\Psi }}(\chi ,\theta _\chi -\epsilon )-\Psi (\chi ,\theta _\chi -\epsilon )|\ge -\Psi (\chi ,\theta _\chi -\epsilon )\right) . \end{aligned}$$
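To see where this bound comes from, note that \({\widehat{\Psi }}(\chi ,\cdot )\) is increasing and vanishes at \(\widehat{\theta _\chi }\), so that, using \(\Psi (\chi ,\theta _\chi +\epsilon )\ge 0\),

$$\begin{aligned} \left\{ \widehat{\theta _\chi }\ge \theta _\chi +\epsilon \right\} \subset \left\{ {\widehat{\Psi }}(\chi ,\theta _\chi +\epsilon )\le 0\right\} \subset \left\{ |{\widehat{\Psi }}(\chi ,\theta _\chi +\epsilon )-\Psi (\chi ,\theta _\chi +\epsilon )|\ge \Psi (\chi ,\theta _\chi +\epsilon )\right\} , \end{aligned}$$

and the event \(\{\widehat{\theta _\chi }\le \theta _\chi -\epsilon \}\) is treated symmetrically.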

So, it suffices to show that

$$\begin{aligned} {\widehat{\Psi }}(\chi ,t)-\Psi (\chi ,t)\rightarrow \, 0\, \quad a.s. \text{ for } \quad t := \theta _\chi \pm \epsilon . \end{aligned}$$
(5.1)

Moreover, since \(\Psi (\chi ,\theta _\chi )={\widehat{\Psi }}(\chi ,\widehat{\theta _\chi })=0\), a mean value expansion under (H2)(i) gives

$$\begin{aligned} \widehat{ \theta _\chi }-\theta _\chi =\frac{\Psi (\chi ,\widehat{ \theta _\chi })-{\widehat{\Psi }}(\chi , \widehat{ \theta _\chi })}{\Psi ^{\prime }(\chi ,\xi _n)} \end{aligned}$$

where \(\xi _n \) lies between \(\widehat{\theta _\chi }\) and \(\theta _\chi \). Provided we can check that

$$\begin{aligned} \exists \tau >0, \ \sum _{n=1}^{\infty } \mathbf{P}\left( \Psi ^{\prime }(\chi ,\xi _n)<\tau \right) \ < \ \infty , \end{aligned}$$
(5.2)

we would have

$$\begin{aligned} \widehat{ \theta _\chi }-\theta _\chi =O_{a.co.}\left( \sup _{t\in [\theta _\chi -\delta ,\, \theta _\chi +\delta ]} |\Psi (\chi ,t)-{\widehat{\Psi }}(\chi , t)|\right) . \end{aligned}$$

Therefore, all that is left to do is to study the convergence rate of

$$\begin{aligned} \sup _{t\in [\theta _\chi -\delta ,\, \theta _\chi +\delta ]} |\Psi (\chi ,t)-{\widehat{\Psi }}(\chi , t)|. \end{aligned}$$

To do that, we write

$$\begin{aligned} {\widehat{\Psi }}(\chi ,t)=\frac{{\widehat{\Psi }}_N(\chi ,t)}{{\widehat{\Psi }}_D(\chi )} \end{aligned}$$

with

$$\begin{aligned} {\widehat{\Psi }}_N(\chi ,t)= & {} \frac{\tau _{n}}{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n} \frac{1}{G_{n}(Y_{i})}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t),\\ {\widehat{\Psi }}_{D}(\chi )= & {} \frac{\tau _{n}}{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n}\frac{1}{G_{n}(Y_{i})}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \end{aligned}$$

and we consider the following decomposition

$$\begin{aligned} {{\widehat{\Psi }}}(\chi , t)-\Psi (\chi ,t)= & {} \frac{1}{{{\widehat{\Psi }}}_D (\chi )} \Big [ \Big ({\widehat{\Psi }}_N (\chi ,t)- \mathbf{E}\left[ {\widehat{\Psi }}_N (\chi ,t)\right] \Big )\nonumber \\&-\,\Big (\Psi (\chi ,t)- \mathbf{E}\left[ {\widehat{\Psi }}_N (\chi ,t)\right] \Big ) \Big ]\nonumber \\&+\,\frac{\Psi (\chi ,t)}{{\widehat{\Psi }}_D (\chi )} \Big [ \mathbf{E}\left[ {\widehat{\Psi }}_D (\chi )\right] -{\widehat{\Psi }}_D(\chi ) \Big ]. \end{aligned}$$
(5.3)
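To fix ideas before estimating each term, here is a minimal numerical sketch (in Python, with NumPy and SciPy) of how \(\widehat{\theta _\chi }\) can be computed; it is an illustration under stated assumptions, not the simulation design of the paper. Huber's \(\psi \), the quadratic kernel, the \(L^2\) semi-metric \(d\), the bandwidth and the data-generating process are all illustrative choices. The key simplification is that \(\tau _n\) and \(\mathbf{E}[K_1]\) cancel in the ratio \({\widehat{\Psi }}_N/{\widehat{\Psi }}_D\), so the zero of \({\widehat{\Psi }}(\chi ,\cdot )\) depends only on the weights \(K(d(\chi ,{\varvec{\chi }}_i)/h)/G_n(Y_i)\), with \(G_n\) the Lynden-Bell (1971) estimator of \(G\).

```python
import numpy as np
from scipy.optimize import brentq

def lynden_bell_G(T, Y):
    """Lynden-Bell (1971) product-limit estimator G_n of the truncation df G,
    built from the observed pairs (T_i, Y_i), i.e. those with T_i <= Y_i.
    Written for clarity (O(n^2)), not for speed."""
    T, Y = np.asarray(T, float), np.asarray(Y, float)
    n = len(T)
    C = lambda s: np.mean((T <= s) & (s <= Y))  # C_n(s) = n^{-1} #{i: T_i <= s <= Y_i}
    # G_n(y) = prod_{i: T_i > y} (1 - 1/(n C_n(T_i)))
    return lambda y: float(np.prod([1.0 - 1.0 / (n * C(t)) for t in T if t > y]))

def m_estimate(chi, X, Y, T, h, psi, d, bracket):
    """Zero in t of Psi_hat(t) = sum_i K(d(chi, X_i)/h) psi(Y_i, t) / G_n(Y_i).
    tau_n and E[K_1] cancel in the ratio Psi_N/Psi_D, so only these weights matter."""
    G = lynden_bell_G(T, Y)
    K = lambda u: 0.75 * (1.0 - u**2) if 0.0 <= u <= 1.0 else 0.0  # quadratic kernel
    w = np.array([K(d(chi, x) / h) / max(G(y), 1e-12) for x, y in zip(X, Y)])
    Psi_hat = lambda t: float(np.sum(w * psi(Y, t)))
    # psi(y, .) monotone => Psi_hat monotone: a single sign change in the bracket
    return brentq(Psi_hat, *bracket)

# Illustrative data: curves X_i, scalar responses Y_i, left truncation by T_i
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)
n = 300
X = np.array([np.sin(2 * np.pi * (grid + rng.uniform())) for _ in range(n)])
Y = X[:, -1] + 0.3 * rng.standard_normal(n)
T = Y.min() - 1.0 + 2.0 * rng.random(n)   # truncation variable
obs = T <= Y                              # only pairs with T_i <= Y_i are observed
X, Y, T = X[obs], Y[obs], T[obs]

c = 1.345
huber = lambda y, t: np.clip(y - t, -c, c)              # Huber's psi(y, t)
d_L2 = lambda a, b: np.sqrt(np.trapz((a - b) ** 2, grid))

theta_hat = m_estimate(X[0], X, Y, T, h=0.5, psi=huber, d=d_L2,
                       bracket=(Y.min() - 1.0, Y.max() + 1.0))
print("robust prediction at chi = X[0]:", theta_hat)
```

Because \(\psi (Y,\cdot )\) is monotone, \({\widehat{\Psi }}(\chi ,\cdot )\) changes sign exactly once, which is why a bracketing root finder such as brentq is a natural choice here.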

Unlike Attouch et al. (2012), we must introduce the following pseudo-estimators

$$\begin{aligned} \widetilde{\Psi }_D(\chi )=\frac{\tau }{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n}\frac{1}{G(Y_{i})}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \end{aligned}$$

and

$$\begin{aligned} \widetilde{\Psi }_N(\chi ,t)=\frac{\tau }{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n}\frac{1}{G(Y_{i})}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t). \end{aligned}$$

Now, we consider the following decomposition

$$\begin{aligned} {\widehat{\Psi }}(\chi ,t)-\Psi (\chi ,t)= & {} \frac{\widehat{\Psi }_N(\chi ,t)}{{\widehat{\Psi }}_{D}(\chi )}-\Psi (\chi ,t)\\= & {} \frac{{\widehat{\Psi }}_N(\chi ,t)}{{\widehat{\Psi }}_{D}(\chi )}-\frac{\Psi (\chi ,t)}{{\widehat{\Psi }}_{D}(\chi )}+\frac{\Psi (\chi ,t)}{{\widehat{\Psi }}_D(\chi )}-\Psi (\chi ,t)\\= & {} \frac{1}{{\widehat{\Psi }}_{D}(\chi )}\left( {\widehat{\Psi }}_N(\chi ,t)-\Psi (\chi ,t)\right) +\frac{\Psi (\chi ,t)}{{\widehat{\Psi }}_{D}(\chi )} \left( -{\widehat{\Psi }}_{D}(\chi )+1\right) \\= & {} \frac{1}{{\widehat{\Psi }}_{D}(\chi )}\left( {\widehat{\Psi }}_N(\chi ,t)-\widetilde{\Psi }_N(\chi ,t)\right) +\frac{1}{{\widehat{\Psi }}_{D}(\chi )} \left( \widetilde{\Psi }_N(\chi ,t)\right. \\&\left. -\;\mathbf{E}\left[ \widetilde{\Psi }_N(\chi ,t)\right] \right) +\frac{1}{{\widehat{\Psi }}_{D}(\chi )}\left( \mathbf{E}\left[ \widetilde{\Psi }_N(\chi ,t)\right] -\Psi (\chi ,t)\right) \\&+\;\frac{\Psi (\chi ,t)}{{\widehat{\Psi }}_{D}(\chi )}\left\{ \left( \widetilde{\Psi }_D(\chi )-{\widehat{\Psi }}_D(\chi )\right) \right. \\&\left. +\;\left( \mathbf{E}\left[ \widetilde{\Psi }_{D}(\chi )\right] -\widetilde{\Psi }_{D}(\chi )\right) + \left( -\mathbf{E}\left[ \widetilde{\Psi }_{D}(\chi )\right] +1\right) \right\} . \end{aligned}$$

Thus, both theorems are consequences of the following intermediate results: Lemmas 1 and 4 handle the dispersion terms, Lemma 2 the bias term, Lemmas 3 and 6 the effect of substituting \((\tau _n, G_n)\) for \((\tau , G)\), and Lemma 5 the centering of the denominator. The first lemma is the key point of the proof.

Lemma 1

Under Hypotheses (H1) and (H3)–(H6), we have,

$$\begin{aligned} \sup _{t\in [\theta _\chi -\delta , \theta _\chi +\delta ]}\left| \widetilde{\Psi }_N (\chi ,t)- \mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,t)\right] \right| = O\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \quad a.co. \end{aligned}$$

Proof of Lemma 1

The compactness of \([\theta _\chi -\delta ,\, \theta _\chi +\delta ]\) allows us to write

$$\begin{aligned}{}[\theta _\chi -\delta ,\, \theta _\chi +\delta ]\subset \bigcup _{j=1}^{d_n}\left( y_j-l_n, y_j+l_n\right) \end{aligned}$$
(5.4)

with \(\displaystyle l_n=n^{-1/2}\) and \(d_n=O\left( n^{1/2}\right) \). We put

$$\begin{aligned} {\mathcal G}_n=\left\{ y_j-l_n,y_j+l_n,1\le j\le d_n\right\} . \end{aligned}$$
(5.5)
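This covering is indeed achievable with these choices: each interval \((y_j-l_n,\,y_j+l_n)\) has length \(2n^{-1/2}\), so covering an interval of length \(2\delta \) requires

$$\begin{aligned} d_n\ge \frac{2\delta }{2l_n}=\delta \,n^{1/2}\quad \text{ points, } \text{ e.g. } \;d_n=\lceil \delta n^{1/2}\rceil =O\left( n^{1/2}\right) . \end{aligned}$$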

We combine (H4) with the monotonicity of \(\mathbf{E}[\widetilde{\Psi }_N (\chi ,\cdot )]\) and \(\widetilde{\Psi }_N (\chi ,\cdot )\) to write

$$\begin{aligned}&\sup _{t\in [\theta _\chi -\delta ,\, \theta _\chi +\delta ]}\left| \widetilde{\Psi }_N (\chi ,t)\right. \nonumber \\&\quad \left. -\,\mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,t)\right] \right| \le \max _{1\le j\le d_n} \max _{z\in \{y_j-l_n, y_j+l_n\}}\left| \widetilde{\Psi }_N (\chi ,z)\right. \nonumber \\&\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \left. -\, \mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,z)\right] \right| +2Cl_n. \end{aligned}$$
(5.6)

On the one hand condition (H6) implies that

$$\begin{aligned} l_n=o\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) . \end{aligned}$$
(5.7)

On the other hand we have, for any \(\varepsilon >0\)

$$\begin{aligned}&\mathbf{P}\left( \max _{z\in {\mathcal G}_n}\left| \widetilde{\Psi }_N (\chi ,z)-\mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,z)\right] \right|>\varepsilon \right) \nonumber \\&\quad \le \sum _{z\in {\mathcal G}_n} \mathbf{P}\left( \left| \widetilde{\Psi }_N (\chi ,z)-\mathbf{E}\left[ \widetilde{\Psi }_N(\chi ,z)\right] \right| > \varepsilon \right) . \end{aligned}$$
(5.8)

Now, we use a truncation method to prove our result in the general case where \(\psi \) is not necessarily bounded. Indeed, we consider the following random variable

$$\begin{aligned} \widetilde{\Psi }_N^*(\chi ,t)= \frac{1}{n\mathbf{E}[K(h^{-1}d(\chi ,{\varvec{\chi }}_1))]}\sum _{i=1}^nK(h^{-1}d(\chi ,{\varvec{\chi }}_i))\frac{\tau }{G(Y_{i})}\psi ^*(Y_i,t) \end{aligned}$$

where \(\psi ^*(\cdot ,t)=\psi (\cdot ,t)\mathbbm {1}_{(\psi (\cdot ,t)< \gamma _n)}\) with \(\gamma _n=n^{a/p}\). The claimed result is then a consequence of the following three intermediate results.

$$\begin{aligned}&\max _{z\in {\mathcal G}_n} \left| \mathbf{E}[\widetilde{\Psi }_N^* (\chi ,z)]- \mathbf{E}[\widetilde{\Psi }_N (\chi ,z)]\right| = o\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) , \end{aligned}$$
(5.9)
$$\begin{aligned}&\sum _nd_n\max _{z\in {\mathcal G}_n}{} \mathbf{P}\left( \left| \widetilde{\Psi }_N^* (\chi ,z)- \widetilde{\Psi }_N (\chi ,z)\right|>\epsilon _0\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \right) \nonumber \\&\quad <\infty \quad \text{ for } \text{ some } \epsilon _0>0 \end{aligned}$$
(5.10)

and

$$\begin{aligned} \sum _nd_n\max _{z\in {\mathcal G}_n}{} \mathbf{P}\left( \left| \widetilde{\Psi }_N^* (\chi ,z)-\mathbf{E}[\widetilde{\Psi }_N^* (\chi ,z)] \right| >\epsilon _0\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \right) <\infty .\quad \end{aligned}$$
(5.11)

\(\square \)
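Indeed, these three bounds control the quantity of interest through the triangle inequality: for every \(z\in {\mathcal G}_n\),

$$\begin{aligned} \left| \widetilde{\Psi }_N (\chi ,z)-\mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,z)\right] \right|\le & {} \left| \widetilde{\Psi }_N (\chi ,z)-\widetilde{\Psi }_N^* (\chi ,z)\right| +\left| \widetilde{\Psi }_N^* (\chi ,z)-\mathbf{E}\left[ \widetilde{\Psi }_N^* (\chi ,z)\right] \right| \\&+\left| \mathbf{E}\left[ \widetilde{\Psi }_N^* (\chi ,z)\right] -\mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,z)\right] \right| . \end{aligned}$$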

Proof of (5.9)

It is clear that, for all \(z\in {\mathcal G}_n\), we can write

$$\begin{aligned}&\left| \mathbf{E}[\widetilde{\Psi }_N^* (\chi ,z)]- \mathbf{E}[\widetilde{\Psi }_N (\chi ,z)]\right| \\&\quad \le C\frac{1}{\phi _\chi (h)}\mathbf{E}\left[ \left| \psi (Y,z)\right| \mathbbm {1}_{\{\psi (Y,z)\ge \gamma _n\}}K(h^{-1}d(\chi ,X))\right] . \end{aligned}$$

Applying Hölder's inequality with \(\kappa =\frac{p}{2}\) and \(\zeta \) such that \( \frac{1}{\kappa }+\frac{1}{\zeta }=1\), we can write, for all \(z\in {\mathcal G}_n\),

$$\begin{aligned}&\mathbf{E}\left[ \left| \psi (Y,z)\right| \mathbbm {1}_{\{\psi (Y,z)\ge \gamma _n\}}K(h^{-1}d(\chi ,{\varvec{\chi }}_1))\right] \\&\quad \le \mathbf{E}^{1/ \kappa }\left[ \left| \psi ^{\kappa }(Y,z)\right| \mathbbm {1}_{\{\psi (Y,z)\ge \gamma _n\}}\right] \mathbf{E}^{1/ \zeta }\left[ K^{\zeta }(h^{-1}d(\chi ,{\varvec{\chi }}_1))\right] \\&\quad \le \gamma _n^{-1} \mathbf{E}^{1/ \kappa }\left[ \left| \psi ^{2\kappa }(Y,z)\right| \right] \mathbf{E}^{1/ \zeta }\left[ K^{\zeta }(h^{-1}d(\chi ,{\varvec{\chi }}_1))\right] \\&\quad \le C\gamma _n^{-1}\phi _\chi ^{1/ \zeta }(h). \end{aligned}$$

Thus,

$$\begin{aligned} \max _{z\in {\mathcal G}_n} \left| \mathbf{E}[\widetilde{\Psi }_N^* (\chi ,z)]-\mathbf{E}[\widetilde{\Psi }_N (\chi ,z)] \right| \le C n^{-a/p}\phi _\chi ^{(1-\zeta )/\zeta }(h). \end{aligned}$$

Therefore, (5.9) is a consequence of the fact that \(a>p>2\).

Proof of (5.10)

To do so, we use Markov's inequality: for all \(z\in {\mathcal G}_n\) and all \(\epsilon >0\),

$$\begin{aligned}&\mathbf{P}\left( \left| \widetilde{\Psi }_N^* (\chi ,z)-\widetilde{\Psi }_N(\chi ,z)\right|> \epsilon \right) \\&\quad \le \sum _{i=1}^n \mathbf{P}\left( \psi (Y_i,z)>n^{a/p}\right) \\&\quad \le n^{1-a}{} \mathbf{E}\left[ \psi ^p(Y,z)\right] .\\ \end{aligned}$$

In particular, for \(\epsilon =\epsilon _0\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \) and since \(a>4\) ensures \(3/2-a<-1\), we have

$$\begin{aligned} d_n\max _{z\in {\mathcal G}_n}{} \mathbf{P}\left( |\widetilde{\Psi }_N (\chi ,z)- \widetilde{\Psi }_N^* (\chi ,z)|>\epsilon _0\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \right) \le n^{3/2-a}< Cn^{-1-\nu }. \end{aligned}$$

Proof of (5.11)

We define, for all \(z\in {\mathcal G}_n\),

$$\begin{aligned} \Lambda _i(z)=K_i (\chi )\psi ^*(Y_i,z)\frac{\tau }{G(Y_{i})}-\mathbf{E}\left[ K_1(\chi )\frac{\tau }{G(Y_{1})}\psi ^*(Y_1,z)\right] . \end{aligned}$$

Therefore, for all \(\varepsilon >0\)

$$\begin{aligned} \mathbf{P}\left\{ \left| \widetilde{\Psi }_N^* (\chi ,z)-\mathbf{E}\left[ \widetilde{\Psi }_N^* (\chi ,z)\right] \right|>\varepsilon \right\}= & {} \mathbf{P}\left\{ \left| \sum _{i=1}^{n}{\Lambda _{i}(z)}\right| >\varepsilon n\mathbf{E}[K_1(\chi )]\right\} . \end{aligned}$$

Now the evaluation of the last probability is based on the asymptotic behavior of the quantity

$$\begin{aligned} S_{n}^{'2}=\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{j=1}^n Cov(\Lambda _i(z),\Lambda _j(z))=\displaystyle \sum _{i=1}^{n}\displaystyle \sum _{i\ne j} Cov(\Lambda _{i}(z),\Lambda _{j}(z))+n Var[\Lambda _1(z)]. \end{aligned}$$

For this term we use the same technique as Masry (1986), splitting the sum into the two sets defined by

$$\begin{aligned} S'_1=\{ (i,j)\;\ \text{ such } \text{ that }\; \ 1\le i-j\le u_n\} \end{aligned}$$

and

$$\begin{aligned} S'_2=\{ (i,j)\; \ \text{ such } \text{ that }\;\ u_n+1\le i-j\le n-1\}. \end{aligned}$$

We denote by \(J'_{1,n}\) and \(J'_{2,n}\) the sums of the covariances over \(S'_1\) and \(S'_2\), respectively. On \(S'_1\), we have, under (H4),

$$\begin{aligned} J'_{1,n}\le & {} C\sum _{S'_1} \left| \mathbf{E}\left[ K_i(\chi )K_j(\chi )\right] \right| +\left| \mathbf{E}\left[ K_i(\chi )\right] \mathbf{E} \left[ K_j(\chi ) \right] \right| . \end{aligned}$$

Because of (H1), (H5) and (H6) we have

$$\begin{aligned} J'_{1,n} \le C n u_n \left( \frac{\phi _\chi (h)}{n^{2/p}}\right) ^{a/(a-1)}. \end{aligned}$$

Concerning \(J'_{2,n}\), we use again the Davydov–Rio inequality in the \(L^\infty \) case and obtain

$$\begin{aligned} |Cov(\Lambda _i(z), \Lambda _j(z))|\le C\gamma _n^2\alpha (|i-j|). \end{aligned}$$

Therefore,

$$\begin{aligned} J'_{2,n}=\sum _{S'_2}|Cov(\Lambda _i(z), \Lambda _j(z))|\le \frac{n\gamma _n^2u_n^{-a+1}}{a-1}. \end{aligned}$$

Taking \(u_n=\left( \frac{\gamma _n^2}{\left( \frac{\phi _\chi (h)}{n^{2/p}}\right) ^{a/(a-1)}}\right) ^{1/a}\), we prove that

$$\begin{aligned} \sum _{i=1}^{n}\displaystyle \sum _{i\ne j} Cov(\Lambda _{i}(z),\Lambda _{j}(z))=O(n\phi _\chi (h)). \end{aligned}$$
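For completeness, this choice of \(u_n\) can be checked by direct substitution: writing \(u_n=\gamma _n^{2/a}\left( \phi _\chi (h)/n^{2/p}\right) ^{-1/(a-1)}\) and using \(\gamma _n^{2/a}=n^{2/p}\), we get

$$\begin{aligned} n\gamma _n^{2}u_n^{1-a}=nu_n\left( \frac{\phi _\chi (h)}{n^{2/p}}\right) ^{a/(a-1)}=n\,\gamma _n^{2/a}\,\frac{\phi _\chi (h)}{n^{2/p}}=n\phi _\chi (h), \end{aligned}$$

so the bounds on \(J'_{1,n}\) and \(J'_{2,n}\) are balanced and both are \(O(n\phi _\chi (h))\).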

On the other hand, under (H3), we have

$$\begin{aligned} Var(\Lambda _1(z))\le C\,\mathbf{E}\left[ K_1^2(\chi )\psi ^{*2}(Y_1,z)\right] \le C\,\mathbf{E}\left[ K_1^2(\chi )\psi ^{2}(Y_1,z)\right] =O(\phi _\chi (h)). \end{aligned}$$

Under (H6) we have

$$\begin{aligned} S_{n}^{'2}=O(n\phi _\chi (h)). \end{aligned}$$
(5.12)

Now, we apply the Fuk–Nagaev inequality to the variables \(\Lambda _{i}(z)\). We have, for all \(\ell >0\) and all \(\varepsilon >0\),

$$\begin{aligned} \mathbf{P}\left\{ \left| \widetilde{\Psi }_{N}^*(\chi ,z)-\mathbf{E}\left[ \widetilde{\Psi }_{N}^*(\chi ,z)\right] \right|>\varepsilon \right\}\le & {} \mathbf{P}\left\{ \left| \sum _{i=1}^{n}{\Lambda _{i}}(z)\right| > \varepsilon n\mathbf{E}[K_1(\chi )]\right\} \\\le & {} C (A'_1(\chi )+A'_2(\chi )) \end{aligned}$$

where

$$\begin{aligned} A'_1= \left( 1+\frac{\varepsilon ^{2}n^{2}(\mathbf{E}[K_1(\chi )])^{2}}{S_{n}^{'2}\ell }\right) ^{-\ell /2}\, \text{ and } \; A'_2=n\ell ^{-1}\left( \frac{\ell }{\varepsilon n\mathbf{E}[K_1(\chi )]}\right) ^{a+1}. \end{aligned}$$

In particular, for \( \varepsilon =\epsilon _0\frac{\sqrt{n\log n \phi _\chi (h)}}{n\mathbf{E}[K_1(\chi )]}\) and \(\ell =C(\log n)^2\), we obtain by (H6)

$$\begin{aligned} d_nA'_2\le C n^{3/2-(a+1)/2}\phi _\chi (h)^{-(a+1)/2}(\log n)^{(3a-1)/2}\le Cn^{-1-\nu '_1} \; \end{aligned}$$
(5.13)

for some \(\nu '_1>0\) and

$$\begin{aligned} d_nA'_1\le C\left( 1+\frac{\epsilon _0^2\log n}{\ell }\right) ^{-\ell /2}\le Cn^{-1-\nu '_2} \quad \text{ for } \text{ some }\quad \nu '_2>0. \end{aligned}$$
(5.14)

From (5.13) and (5.14) we obtain

$$\begin{aligned} \sum _nd_n\max _{z\in {\mathcal G}_n}{} \mathbf{P}\left( \left| \widetilde{\Psi }_N^* (\chi ,z)-\mathbf{E}[\widetilde{\Psi }_N^* (\chi ,z)] \right| >\epsilon _0\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \right) <\infty . \end{aligned}$$

Lemma 2

(Derrar et al. 2015) Under Hypotheses (H1), (H2)((i), (iii)) and (H5), we have,

$$\begin{aligned} \sup _{t\in [\theta _\chi -\delta , \theta _\chi +\delta ]}\left| \mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,t)\right] - \tau \Psi (\chi ,t)\right| = o(1). \end{aligned}$$

Furthermore, if we add (H2)(ii), we have

$$\begin{aligned} \sup _{t\in [\theta _\chi -\delta , \theta _\chi +\delta ]}\left| \mathbf{E}\left[ \widetilde{\Psi }_N (\chi ,t)\right] - \tau \Psi (\chi ,t)\right| =O\left( h^{b_1}\right) . \end{aligned}$$

Lemma 3

Under Hypotheses (H1) and (H3)–(H5), we have,

$$\begin{aligned} \sup _{t\in [\theta _\chi -\delta , \theta _\chi +\delta ]}\left| {\widehat{\Psi }}_N (\chi ,t)- \widetilde{\Psi }_N (\chi ,t)\right| =\displaystyle O\left( \sqrt{\frac{1}{n}}\right) \quad a.s. \end{aligned}$$

Proof of Lemma 3

By simple algebra, we can write, for all \(t \in [\theta _\chi -\delta , \theta _\chi +\delta ]\),

$$\begin{aligned}&\left| {\widehat{\Psi }}_N (\chi ,t)- \widetilde{\Psi }_N (\chi ,t)\right| \\&\quad = \displaystyle \left| \frac{\tau _{n}-\tau }{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n} \frac{1}{G_{n}(Y_{i})}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t)\right. \\&\qquad + \left. \displaystyle \;\tau \,\frac{1}{n\mathbf{E}[K_{1}]} \displaystyle \sum _{i=1}^{n}\left( \frac{G(Y_{i})-G_{n}(Y_{i})}{G_{n}(Y_{i})G(Y_{i})}\right) K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t)\right| \\&\quad \le \displaystyle \left[ \frac{\big |\tau _{n}-\tau \big |}{G_{n}(a_{F})}+\tau \frac{\sup _{t\in [a,\;b]}\big | G_{n}(t)-G(t)\big |}{G _{n}(a_{F})G(a_{F})}\right] \\&\qquad \times \, \displaystyle \frac{1}{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t). \end{aligned}$$

We use the result of Ould Saïd and Tatachak (2009), which gives \(\big |\tau _{n}-\tau \big |=O_{a.s.}(n^{-\frac{1}{2}})\), and Remark 6 of Woodroofe (1985), from which we obtain \(\sup _{t\in [a,\;b]}\big | G_{n}(t)-G(t)\big |=O_{a.s.}(n^{-\frac{1}{2}})\). Furthermore, for the second factor, we recall that this term has been studied by Attouch et al. (2012), where it is shown that

$$\begin{aligned}&\sup _{t\in [\theta _\chi -\delta , \theta _\chi +\delta ]}\frac{1}{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n}\left[ K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t)\right. \\&\quad \left. -\;\mathbf{E}\left[ K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t)\right] \right] =o\left( 1\right) \end{aligned}$$

It follows that

$$\begin{aligned} \sup _{t\in [\theta _\chi -\delta , \theta _\chi +\delta ]}\frac{1}{n\mathbf{E}[K_{1}]}\displaystyle \sum _{i=1}^{n}K\left( \frac{d(\chi ,{\varvec{\chi }}_{i})}{h}\right) \psi (Y_{i},t)=O_{a.s.}(1). \end{aligned}$$

Thus, we have

$$\begin{aligned} \left| {\widehat{\Psi }}_N (\chi ,t)- \widetilde{\Psi }_N (\chi ,t)\right| =\displaystyle O_{a.s.}\Big (n^{-\frac{1}{2}}\Big ) \end{aligned}$$

which completes the proof of Lemma 3.

Lemma 4

Under Hypotheses (H1), (H4)–(H6), we have,

$$\begin{aligned} \widetilde{\Psi }_{D}(\chi )-\mathbf{E}\left[ \widetilde{\Psi }_{D}(\chi )\right] = O\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \quad a.co. \end{aligned}$$

Proof of Lemma 4

The proof is similar to that of Lemma 1, replacing the function \(\psi \) by 1. \(\square \)

Lemma 5

(Derrar et al. 2015) Under Hypotheses (H1), (H4) and (H5), we have,

$$\begin{aligned} \mathbf{E}\left[ \widetilde{\Psi }_{D}(\chi )\right] =1. \end{aligned}$$

Lemma 6

Under Hypotheses (H1), (H4) and (H5), we have,

$$\begin{aligned} {\widehat{\Psi }}_{D}(\chi )-\widetilde{\Psi }_{D}(\chi )= O_{a.s.}\left( \sqrt{\frac{1}{n}}\right) .\quad \end{aligned}$$

Proof of Lemma 6

Again, the proof is similar to that of Lemma 3, replacing \(\psi \) by 1. \(\square \)

Corollary 1

Under Assumptions (H1), (H2)((i)–(ii)), (H3)–(H5), and if \(\Psi ^{\prime }(\chi ,\theta _\chi )\ne 0\), then \(\widehat{\theta }_\chi \) exists a.s. for all sufficiently large n, and

$$\begin{aligned} \exists C>0 \quad \Psi ^\prime (\chi ,\xi _n)> C \end{aligned}$$

where \(\xi _n\) is between \(\theta _\chi \) and \(\widehat{\theta }_\chi \).

Proof of Corollary 1

It is clear that, if \(\psi _\chi (Y,\cdot )\) is increasing, we have for all \(\epsilon >0\)

$$\begin{aligned} \Psi (\chi , \theta _\chi -\epsilon )\le \Psi (\chi ,\theta _\chi )\le \Psi (\chi , \theta _\chi +\epsilon ). \end{aligned}$$


The results of Lemmas 1–6 show that

$$\begin{aligned} {\widehat{\Psi }}(\chi ,t)\longrightarrow \Psi (\chi ,t) \; \; \hbox {a. s.} \end{aligned}$$

for all real fixed \(t\in [\theta _\chi -\delta ,\theta _\chi +\delta ]\). So, for sufficiently large n and for all \(\epsilon \le \delta \)

$$\begin{aligned} {\widehat{\Psi }}(\chi ,\theta _\chi -\epsilon )\le 0\le {\widehat{\Psi }}(\chi ,\theta _\chi +\epsilon ) \quad \text{ a.s. } \end{aligned}$$

Since \({\widehat{\Psi }}(\chi ,t)\) is a continuous function of t, there exists a \(\widehat{\theta _\chi }\in [\theta _\chi -\epsilon ,\theta _\chi +\epsilon ]\) such that \( {\widehat{\Psi }}(\chi ,\widehat{\theta _\chi })=0. \)

Concerning the uniqueness of \( \widehat{\theta _\chi } \), we point out that it is a direct consequence of the strict monotonicity of \(\psi _\chi \) with respect to its second argument and the positivity of \(K\).

Finally, the second part of this corollary is a direct consequence of the regularity assumption (H2) (i) on \(\Psi (\chi ,\cdot )\) and the convergence

$$\begin{aligned} \widehat{\theta _\chi } \rightarrow \theta _\chi \;\; \hbox {a.s.} \;\; \hbox {as}\;\;\; n \longrightarrow \infty . \end{aligned}$$

\(\square \)


Cite this article

Derrar, S., Laksaci, A. & Saïd, E.O. M-estimation of the regression function under random left truncation and functional time series model. Stat Papers 61, 1181–1202 (2020). https://doi.org/10.1007/s00362-018-0979-z
