Abstract
In this paper we study M-estimation of the functional nonparametric regression when the response variable is subject to left truncation by another random variable. Under standard assumptions, we establish the almost complete convergence rate of this robust estimator when the sample is an \(\alpha \)-mixing sequence. This approach can be applied in time series analysis to the prediction problem. Our asymptotic results are supported by a simulation study.
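To fix ideas, the estimator studied here combines kernel weights in a functional semi-metric with a robust score equation. The following sketch (our own illustration, not the paper's code: the function names, the Epanechnikov-type kernel, the Huber score, and the toy data are all assumptions; in particular it omits the Lynden-Bell weights the paper uses to correct for truncation) shows the basic mechanism on fully observed data.

```python
import numpy as np

def huber_psi(u, c=1.345):
    """Huber score function: identity near zero, clipped at +/- c."""
    return np.clip(u, -c, c)

def robust_functional_estimate(curves, y, chi, h, c=1.345):
    """Kernel-weighted robust (M-) estimate of the location of Y given X = chi.

    curves : (n, p) array, each row a discretised curve X_i
    y      : (n,) array of responses
    chi    : (p,) target curve
    h      : bandwidth for the kernel applied to the semi-metric d

    Solves sum_i K(d(chi, X_i)/h) * psi(y_i - t) = 0 in t by bisection.
    """
    # L2 semi-metric between each observed curve and the target curve
    d = np.sqrt(np.mean((curves - chi) ** 2, axis=1))
    w = np.maximum(1.0 - (d / h) ** 2, 0.0)  # Epanechnikov-type kernel weights
    if not np.any(w > 0):
        raise ValueError("bandwidth too small: no curve falls in the ball")

    def score(t):
        # decreasing in t, so a root is bracketed by [min(y), max(y)]
        return np.sum(w * huber_psi(y - t, c))

    lo, hi = y.min(), y.max()
    for _ in range(80):  # bisection on the monotone score equation
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy check: curves shifted by a random level, response = 2 * level + noise
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)
levels = rng.uniform(-1.0, 1.0, 200)
X = levels[:, None] + np.sin(2 * np.pi * grid)[None, :]
Y = 2.0 * levels + 0.1 * rng.standard_normal(200)
chi = 0.5 + np.sin(2 * np.pi * grid)  # target curve with level 0.5
theta = robust_functional_estimate(X, Y, chi, h=0.3)
print(theta)  # typically close to 2 * 0.5 = 1.0
```

The bounded Huber score is what makes the estimate robust: a single outlying response can shift the root of the score equation by only a bounded amount, unlike a kernel-weighted mean.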
References
Andersen PK, Borgan O, Gill RD, Keiding N (1993) Statistical models based on counting processes. Springer, New York
Attouch M, Laksaci A, Ould Saïd E (2010) Asymptotic normality of a robust estimator of the regression function for functional time series. J Korean Stat Soc 39:489–500
Attouch M, Laksaci A, Ould Saïd E (2012) Robust regression for functional time series data. J Jpn Stat Soc 42:125–143
Azzedine N, Laksaci A, Ould Saïd E (2008) On the robust nonparametric regression estimation for functional regressor. Stat Probab Lett 78:3216–3221
Boente G, Fraiman R (1989) Nonparametric regression estimation. J Multivar Anal 29:180–198
Boente G, Fraiman R (1990) Asymptotic distribution of robust estimators for nonparametric models from mixing processes. Ann Stat 18:891–906
Boente G, Gonzalez-Manteiga W, Pérez-Gonzalez A (2009) Robust nonparametric estimation with missing data. J Stat Plann Inference 139:571–592
Bogachev VI (1999) Gaussian measures. Mathematical surveys and monographs, vol 62. American Mathematical Society, Providence
Bollerslev T (1986) Generalized autoregressive conditional heteroskedasticity. J Econom 31:307–327
Bongiorno EG, Salinelli E, Goia A, Vieu P (eds) (2014) Contributions in infinite-dimensional statistics and related topics. Società editrice Esculapio, Bologna
Bradley RC (2007) Introduction to strong mixing conditions, vol I–III. Kendrick Press, Utah
Collomb G, Härdle W (1986) Strong uniform convergence rates in robust nonparametric time series analysis and prediction: Kernel regression estimation from dependent observations. Stoch Proc Their Appl 23:77–89
Chen J, Zhang L (2009) Asymptotic properties of nonparametric M-estimation for mixing functional data. J Stat Plann Inference 139:533–546
Crambes C, Delsol L, Laksaci A (2008) Robust nonparametric estimation for functional data. J Nonparametr Stat 20:573–598
Dedecker J, Doukhan P, Lang G, Leon JR, Louhichi S, Prieur C (2007) Weak dependence: with examples and applications, vol 190. Lecture notes in statistics. Springer, New York
Derrar S, Laksaci A, Ould Saïd E (2015) On the nonparametric estimation of the functional \(\psi \)-regression for a random left-truncation model. J Stat Theory Pract 9:823–849
Engle RF (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of U.K. inflation. Econometrica 50:987–1007
Fan J, Hu TC, Truong YK (1994) Robust nonparametric function estimation. Scand J Stat 21:433–446
Ferraty F, Vieu P (2006) Nonparametric functional data analysis: theory and practice. Springer, New York
Ferraty F, Laksaci A, Vieu P (2006) Estimating some characteristics of the conditional distribution in nonparametric functional models. Stat Inference Stoch Process 9:47–76
Gheriballah A, Laksaci A, Sekkal S (2013) Nonparametric \(M\)-regression for functional ergodic data. Stat Probab Lett 83:902–908
He S, Yang G (1994) Estimating a lifetime distribution under different sampling plan. In: Gupta SS, Berger JO (eds) Statistical decision theory and related topics. Springer, Berlin, pp 73–85
He S, Yang G (1998) Estimation of the truncation probability in the random truncation model. Ann Stat 26:1011–1027
Helal N, Ould Saïd E (2016) Kernel conditional quantile estimator under left truncation for functional regressors. Opuscula Math 36(1):25–48. http://dx.doi.org/10.7494/OpMath.2016.36.1.25
Horváth L, Kokoszka P (2012) Inference for functional data with applications. Springer, New York, p 200
Huber PJ (1964) Robust estimation of a location parameter. Ann Math Stat 35:73–101
Laïb N, Ould Saïd E (2000) A robust nonparametric estimation of the autoregression function under an ergodic hypothesis. Can J Stat 28:817–828
Li WV, Shao QM (2001) Gaussian processes: inequalities, small ball probabilities and applications. In: Rao CR, Shanbhag D (eds) Stochastic processes: theory and methods. Handbook of statistics, vol 19. North-Holland, Amsterdam
Lynden-Bell D (1971) A method of allowing for known observational selection in small samples applied to 3CR quasars. Mon Not R Astron Soc 155:95–118
Masry E (1986) Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans Inf Theory 32:254–267
Ould Saïd E, Lemdani M (2006) Asymptotic properties of a nonparametric regression function estimator with randomly truncated data. Ann Inst Stat Math 58:357–378
Ould Saïd E, Tatachak A (2009) Strong consistency rate for the kernel mode estimator under strong mixing hypothesis and left truncation. Commun Stat Theory Methods 38:1154–1169
Ozaki T (1979) Nonlinear time series models for nonlinear random vibrations. Technical report. University of Manchester, Manchester
Ramsay JO, Silverman BW (2005) Functional data analysis, 2nd edn. Springer, New York
Rio E (2000) Théorie asymptotique des processus aléatoires faiblement dépendants. Mathématiques & applications, vol 31. Springer, Berlin
Stute W (1993) Almost sure representations of the product-limit estimator for truncated data. Ann Stat 21:146–156
Wang JF, Liang HY (2012) Asymptotic properties for an M-estimator of the regression function with truncation and dependent data. J Korean Stat Soc 41:351–367
Wang JF, Liang HY, Fan GL (2012) Local M-estimation of nonparametric regression with left-truncated and dependent data. Sci Sin Math 42:995–1015
Woodroofe M (1985) Estimating a distribution function with truncated data. Ann Stat 13:163–177
Acknowledgements
The authors are grateful to the two anonymous reviewers for their particularly careful reading, relevant remarks and constructive comments, which helped them to improve the quality and the presentation of an earlier version of this paper. The second author would like to express their gratitude to King Khalid University, Saudi Arabia for providing administrative and technical support.
Appendix
For the proofs of Theorems 1 and 2 we use the fact that \(\rho \) is strictly convex and continuously differentiable with respect to the second component; hence \(\psi \) is strictly monotone and continuous with respect to the second component. We give the proof for the case of an increasing \(\psi (Y,\cdot )\), the decreasing case being obtained by considering \(-\psi (Y,\cdot )\). From this, it is clear that the optimization problems (1.3) and (2.1) correspond to finding the zeros of the functions \(\Psi (\cdot ,\cdot )\) and \({\widehat{\Psi }}(\cdot ,\cdot )\), respectively. Therefore, under this consideration, we can write, for all \(\epsilon >0\)
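As a concrete illustration of such a loss (our example, not one prescribed by the paper), the log-cosh loss satisfies all of these requirements:

```latex
\rho (Y,t)=\log \cosh (Y-t),\qquad
\psi (Y,t)=\frac{\partial \rho }{\partial t}(Y,t)=-\tanh (Y-t).
```

Here \(\rho (Y,\cdot )\) is strictly convex, since \(\partial ^2\rho /\partial t^2=1-\tanh ^2(Y-t)>0\), and \(\psi (Y,\cdot )\) is strictly increasing and continuous, which is exactly the increasing case treated below.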
Hence, for all \(\epsilon >0\), we have
So, it suffices to show that
Moreover, under (H2)(i), we get that
where \(\xi _n \) is between \(\widehat{\theta _\chi }\) and \(\theta _\chi \). Thus, as soon as we can check that
we would have
Therefore, all that is left to do is to study the convergence rate of
To do that, we write
with
and we consider the following decomposition
Unlike in Attouch et al. (2012), we must introduce the following pseudo-estimators
and
Moreover, we put
and
Now, we consider the following decomposition
Thus, both theorems are consequences of the following intermediate results. The first lemma is the main point of the proof.
Lemma 1
Under Hypotheses (H1) and (H3)–(H6), we have,
Proof of Lemma 1
The compactness of \([\theta _\chi -\delta ,\, \theta _\chi +\delta ]\) allows us to write
with \(\displaystyle l_n=n^{-1/2}\) and \(d_n=O\left( n^{1/2}\right) \). We put
We combine (H4) with the monotonicity of \(\mathbf{E}[\widetilde{\Psi }_N (\chi ,\cdot )]\) and \(\widetilde{\Psi }_N (\chi ,\cdot )\) to write
On the one hand, condition (H6) implies that
On the other hand, we have, for any \(\varepsilon >0\)
Now, we use a truncation method to prove our result in the general case where \(\psi \) is not necessarily bounded. Indeed, we consider the following random variable
where \(\psi ^*(\cdot ,t)=\psi (\cdot ,t)\mathbbm {1}_{(\psi (\cdot ,t)< \gamma _n)}\) with \(\gamma _n=n^{a/p}\). So, the claimed result is a consequence of the following three intermediate results.
and
\(\square \)
Proof of (5.9)
It is clear that, for all \(z\in {\mathcal G}_n\), we can write
Applying Hölder's inequality with \(\kappa =\frac{p}{2}\) and \(\zeta \) such that \( \frac{1}{\kappa }+\frac{1}{\zeta }=1\), we can write, for all \(z\in {\mathcal G}_n\),
Thus,
Therefore, (5.9) is a consequence of the fact that \(a>p>2\).
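To make the role of the threshold \(\gamma _n=n^{a/p}\) explicit in this step: assuming, as is standard for this truncation device, a moment condition of the form \(\mathbf{E}\left[ |\psi (Y,t)|^{p}\right] <\infty \) (we state it here only as an illustration, without reproducing the paper's hypothesis labels), Markov's inequality bounds the discarded tail by

```latex
\mathbf{P}\left( \psi (Y_i,t)\ge \gamma _n\right)
\le \gamma _n^{-p}\,\mathbf{E}\left[ |\psi (Y_i,t)|^{p}\right]
= n^{-a}\,\mathbf{E}\left[ |\psi (Y_i,t)|^{p}\right] ,
```

which is summable in \(n\) whenever \(a>1\), and hence compatible with the condition \(a>p>2\) used above.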
Proof of (5.10)
To do so, we use Markov's inequality: \(\forall z\in {\mathcal G}_n\), \(\forall \epsilon >0\),
In particular, for \(\epsilon =\epsilon _0\left( \sqrt{\frac{\log n}{n\phi _\chi (h)}}\right) \) and thanks to \( a>4 \), we have
Proof of (5.11)
We define, for all \(z\in {\mathcal G}_n\),
Therefore, for all \(\epsilon >0\)
Now the evaluation of the last quantity is based on the asymptotic behavior of the following quantity
For this term we use the technique developed by Masry (1986), splitting the sum over the two sets defined by
and
We denote by \(J'_{1,n}\) and \(J'_{2,n}\) the sums of the covariances over \(S'_1\) and \(S'_2\), respectively. On \(S'_1\), we have, under (H4),
Because of (H1), (H5) and (H6) we have
Concerning \(J'_{2,n}\), we use again the Davydov–Rio inequality in the \(L^\infty \) case and obtain
Therefore,
Taking \(u_n=\left( \frac{\gamma _n^2}{\left( \frac{\phi _\chi (h)}{n^{2/p}}\right) ^{a/(a-1)}}\right) ^{1/a}\), we prove that
On the other hand, under (H3), we have
Under (H6) we have
Now, we apply the Fuk–Nagaev inequality to \(\Lambda _{i}(z)\). We have, for all \(\ell >0\) and for all \(\varepsilon >0\),
where
In particular, for \( \varepsilon =\epsilon _0\frac{\sqrt{n\log n \phi _\chi (h)}}{n\mathbf{E}[K_1(\chi )]}\) and \(\ell =C(\log n)^2\), we obtain by (H6)
for some \(\nu '_1>0\) and
From (5.13) and (5.14) we obtain
Lemma 2
(Derrar et al. 2015) Under Hypotheses (H1), (H2)((i), (iii)) and (H5), we have,
Furthermore if we add (H2)(ii), we have
Lemma 3
Under Hypotheses (H1) and (H3)–(H5), we have,
Proof of Lemma 3
By simple algebra, we can write, for all \(t \in [\theta _\chi -\delta , \theta _\chi +\delta ]\),
By the result of Ould Saïd and Tatachak (2009), we have \(\big |\tau _{n}-\tau \big |=O_{a.s.}(n^{-\frac{1}{2}})\), and by Remark 6 of Woodroofe (1985), we obtain \(|G_{n}(a_{F})-G(a_{F})|=O_{a.s.}(n^{-\frac{1}{2}})\). Furthermore, for the second part, we recall that this term has been studied by Attouch et al. (2012), where it is shown that
It follows that
Thus, we have
which completes the proof of Lemma 3.
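The quantities \(G_n\) and \(\tau _n\) appearing in this proof are built from the Lynden-Bell (1971) product-limit construction. As an illustration, here is a minimal sketch of these estimators (the function names and toy data are ours, not the paper's; ties are ignored, as is harmless for continuous data):

```python
import numpy as np

def lynden_bell(t_obs, y_obs):
    """Lynden-Bell product-limit estimators from left-truncated pairs.

    t_obs, y_obs : arrays of observed pairs with t_obs[i] <= y_obs[i]
    Returns (F_n, G_n): estimators of the d.f. of Y and of T.
    """
    t_obs = np.asarray(t_obs, dtype=float)
    y_obs = np.asarray(y_obs, dtype=float)
    n = len(y_obs)

    def C_n(v):
        # fraction of observation windows [T_i, Y_i] covering the point v
        return np.mean((t_obs <= v) & (v <= y_obs))

    def F_n(y):
        # 1 - F_n(y) = prod_{i : Y_i <= y} (1 - 1 / (n * C_n(Y_i)))
        pts = y_obs[y_obs <= y]
        return 1.0 - float(np.prod([1.0 - 1.0 / (n * C_n(v)) for v in pts]))

    def G_n(t):
        # G_n(t) = prod_{i : T_i > t} (1 - 1 / (n * C_n(T_i)))
        pts = t_obs[t_obs > t]
        return float(np.prod([1.0 - 1.0 / (n * C_n(v)) for v in pts]))

    return F_n, G_n

# Toy data: Y ~ U(0, 1) left-truncated by an independent T ~ U(-0.5, 0.5)
rng = np.random.default_rng(1)
T = rng.uniform(-0.5, 0.5, 2000)
Y = rng.uniform(0.0, 1.0, 2000)
keep = T <= Y                      # only pairs with T <= Y are observed
F_n, G_n = lynden_bell(T[keep], Y[keep])
print(F_n(0.5), G_n(0.2))          # should approach F(0.5) = 0.5, G(0.2) = 0.7
```

The estimator \(\tau _n\) whose rate is quoted above can be computed from the same ingredients via the He and Yang (1998) identity \(\tau =G(y)(1-F(y^-))/C(y)\), valid at any \(y\) with \(C(y)>0\).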
Lemma 4
Under Hypotheses (H1), (H4)–(H6), we have,
Proof of Lemma 4
The proof is similar to that of Lemma 1, replacing the function \(\psi \) by 1. \(\square \)
Lemma 5
(Derrar et al. 2015) Under Hypotheses (H1), (H4) and (H5), we have,
Lemma 6
Under Hypotheses (H1), (H4) and (H5), we have,
Proof of Lemma 6
Again, the proof is similar to that of Lemma 3, replacing \(\psi \) by 1. \(\square \)
Corollary 1
Under Assumptions (H1), (H2) ((i)–(ii)), (H3)–(H5) and if \(\Psi ^{\prime }(\chi ,\theta _\chi )\not =0\), then \(\widehat{\theta }_\chi \) exists a.s. for all sufficiently large n such that
where \(\xi _n\) is between \(\theta _\chi \) and \(\widehat{\theta }_\chi \).
Proof of Corollary 1
It is clear that, if \(\psi _\chi (Y,.)\) is increasing, we have for all \(\epsilon >0\)
\(\square \)
The results of Lemmas 1–6 show that
for all real fixed \(t\in [\theta _\chi -\delta ,\theta _\chi +\delta ]\). So, for sufficiently large n and for all \(\epsilon \le \delta \)
Since \({\widehat{\Psi }}(\chi ,t)\) is a continuous function of t, there exists a \(\widehat{\theta _\chi }\in [\theta _\chi -\epsilon ,\theta _\chi +\epsilon ]\) such that \( {\widehat{\Psi }}(\chi ,\widehat{\theta _\chi })=0. \)
Concerning the uniqueness of \( \widehat{\theta _\chi } \), we point out that it is a direct consequence of the strict monotonicity of \(\psi _\chi \) with respect to the second component and the positivity of K.
Finally, the second part of this corollary is a direct consequence of the regularity assumption (H2) (i) on \(\Psi (\chi ,\cdot )\) and the convergence
Cite this article
Derrar, S., Laksaci, A. & Saïd, E.O. M-estimation of the regression function under random left truncation and functional time series model. Stat Papers 61, 1181–1202 (2020). https://doi.org/10.1007/s00362-018-0979-z
Keywords
- Asymptotic normality
- Functional data
- Kernel estimator
- Lynden-Bell estimator
- Robust estimation
- Small ball probabilities
- Strong consistency
- Truncated data