
Local linear estimation of the conditional cumulative distribution function: Censored functional data case


A Correction to this article was published on 08 March 2022


Abstract

In this paper, we estimate the conditional cumulative distribution function of a randomly censored scalar response variable given a functional random variable, using the local linear approach. Under this structure, we establish the asymptotic normality of the constructed estimator, with explicit rates. Moreover, the usefulness of our results is illustrated through a simulation study.
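
To give a computational picture of the object under study, the following minimal Python sketch evaluates a kernel-weighted estimate of the conditional CDF from right-censored data, where the censored responses are transformed through inverse-probability-of-censoring weights \(\delta_i/\overline{G}_n(T_i)\). The semi-metric rho, the particular kernels and the bandwidths h_K, h_J are illustrative assumptions, and the Nadaraya-Watson-type weights below stand in for the full local linear weights studied in the paper; a sketch of a Kaplan-Meier routine estimating \(\overline{G}_n\) is given in the Appendix.

```python
import numpy as np

def local_cdf_censored(x, y, X, T, delta, G_bar, rho, h_K, h_J):
    """Kernel-weighted estimate of F^x(y) from right-censored data (illustrative).

    X      : list of functional covariates (curves)
    T      : observed times, T_i = min(Y_i, C_i)
    delta  : censoring indicators, delta_i = 1{Y_i <= C_i}
    G_bar  : estimate of the censoring survival function at each T_i
             (e.g. the Kaplan-Meier routine sketched in the Appendix)
    rho    : semi-metric between curves; h_K, h_J : bandwidths
    """
    d = np.array([rho(Xi, x) for Xi in X])        # distances to the target curve x
    K = np.maximum(1.0 - np.abs(d) / h_K, 0.0)    # triangle kernel supported on [-1, 1]
    J = np.clip((y - T) / h_J + 0.5, 0.0, 1.0)    # integrated (cdf-type) kernel
    num = np.sum(K * delta * J / G_bar)           # IPCW-transformed responses
    den = np.sum(K)
    return num / den if den > 0 else np.nan
```

Replacing the simple kernel weights K by local linear weights built from the bi-functional operator \(\beta(X_i, x)\) would give a sketch closer to the estimator whose asymptotic normality is established below.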




Notes

  1. Let \((x_{n})_{n\in\mathbb{N}}\) be a sequence of real random variables. We say that \((x_{n})\) converges almost-completely (a.co.) toward zero if, and only if, for all \(\epsilon > 0\), \({\sum }_{n=1}^{\infty } \mathbb {P}(\mid x_{n} \mid >\epsilon ) < \infty \). Moreover, we say that the rate of the almost-complete convergence of \((x_{n})\) to zero is of order \(u_{n}\) (with \(u_{n} \to 0\)) and we write \(x_{n} = O_{a.co.}(u_{n})\) if, and only if, there exists \(\epsilon > 0\) such that \({\sum }_{n=1}^{\infty }\mathbb {P}(\mid x_{n}\mid >\epsilon u_{n}) < \infty \). This kind of convergence implies both almost-sure convergence and convergence in probability.

References

  • Altendji, B., Demongeot, J., Laksaci, A. and Rachdi, M. (2018). Functional data analysis: estimation of the relative error in functional regression under random left-truncation model. Journal of Nonparametric Statistics 30, 472–490.

  • Ayad, S., Laksaci, A., Rahmani, S. and Rouane, R. (2020). On the local linear modelization of the conditional density for functional and ergodic data. METRON 78, 237–254.

  • Al-Awadhi, F., Kaid, Z., Laksaci, A., Ouassou, I. and Rachdi, M. (2019). Functional data analysis: local linear estimation of the L1-conditional quantiles. Statistical Methods and Applications 28, 217–240.

  • Baíllo, A. (2009). Local linear regression for functional predictor and scalar response. Journal of Multivariate Analysis 100, 102–111.

  • Barrientos, J., Ferraty, F. and Vieu, P. (2010). Locally modelled regression and functional data. Journal of Nonparametric Statistics 22, 617–632.

  • Beran, R. (1981). Nonparametric regression with randomly censored survival data. Technical report, University of California, Berkeley.

  • Berlinet, A., Elamine, A. and Mas, A. (2011). Local linear regression for functional data. Annals of the Institute of Statistical Mathematics 63, 1047–1075.

  • Bouanani, O., Laksaci, A., Rachdi, M. and Rahmani, S. (2019). Asymptotic normality of some conditional nonparametric functional parameters in high-dimensional statistics. Behaviormetrika 46, 199–233.

  • Demongeot, J., Laksaci, A., Madani, F. and Rachdi, M. (2013). Functional data: local linear estimation of the conditional density and its application. Statistics 47, 26–44.

  • Demongeot, J., Laksaci, A., Rachdi, M. and Rahmani, S. (2014). On the local linear modelization of the conditional distribution for functional data. Sankhya A 76, 328–355.

  • Demongeot, J., Laksaci, A., Naceri, A. and Rachdi, M. (2017). Local linear regression modelization when all variables are curves. Statistics and Probability Letters 121, 37–44.

  • Fan, J. and Gijbels, I. (1994). Censored regression: local linear approximations and their applications. Journal of the American Statistical Association 89, 560–570.

  • Ferraty, F., Laksaci, A. and Vieu, P. (2006a). Estimation of some characteristics of the conditional distribution in nonparametric functional models. Statistical Inference for Stochastic Processes 9, 47–76.

  • Ferraty, F. and Vieu, P. (2006b). Nonparametric Functional Data Analysis: Theory and Practice. Springer Series in Statistics, Springer, New York.

  • Ferraty, F., Mas, A. and Vieu, P. (2007). Nonparametric regression on functional data: inference and practical aspects. Australian & New Zealand Journal of Statistics 49, 267–286.

  • Helal, N. and Ould-Saïd, E. (2016). Kernel conditional quantile estimator under left truncation for functional regressors. Opuscula Mathematica 36, 25–48.

  • Horrigue, W. and Ould-Saïd, E. (2011). Strong uniform consistency of a nonparametric estimator of a conditional quantile for censored dependent data and functional regressors. Random Operators and Stochastic Equations 19, 131–156.

  • Horrigue, W. and Ould-Saïd, E. (2014). Nonparametric regression quantile estimation for dependent functional data under random censorship: Asymptotic normality. Communications in Statistics – Theory and Methods 44, 4307–4332.

  • Kaplan, E. L. and Meier, P. (1958). Nonparametric estimation from incomplete observations. Journal of the American Statistical Association 53, 457–481.

  • Laksaci, A., Rachdi, M. and Rahmani, S. (2013). Spatial modelization: local linear estimation of the conditional distribution for functional data. Spatial Statistics 6, 1–23.

  • Leulmi, S. (2019). Local linear estimation of the conditional quantile for censored data and functional regressors. Communications in Statistics – Theory and Methods, 1–15.

  • Lipsitz, S. R. and Ibrahim, J. G. (2000). Estimation with correlated censored survival data with missing covariates. Biostatistics 1, 315–327.

  • Ling, N., Liu, Y. and Vieu, P. (2015). Nonparametric regression estimation for functional stationary ergodic data with missing at random. Journal of Statistical Planning and Inference 162, 75–87.

  • Ling, N., Liu, Y. and Vieu, P. (2016). Conditional mode estimation for functional stationary ergodic data with responses missing at random. Statistics 50, 991–1013.

  • Li, W. V. and Shao, Q. M. (2001). Gaussian processes: inequalities, small ball probabilities and applications. Handbook of Statistics 19, 533–597.

  • Ould Saïd, E. and Sadki, O. (2011). Asymptotic normality for a smooth kernel estimator of the conditional quantile for censored time series. South African Statistical Journal 45, 65–98.

  • Kohler, M., Máthé, K. and Pintér, M. (2002). Prediction from randomly right censored data. Journal of Multivariate Analysis 80, 73–100.

  • Ren, J. J. and Gu, M. (1997). Regression M-estimators with doubly censored data. Annals of Statistics 25, 2638–2664.

  • Stute, W. (1993). Consistent estimation under random censorship when covariables are present. Journal of Multivariate Analysis 45, 89–103.

  • Zhou, Z. Y. and Lin, Z. (2016). Asymptotic normality of locally modelled regression estimator for functional data. Journal of Nonparametric Statistics 28, 116–131.


Acknowledgements

The authors would like to thank the Editor and the two anonymous reviewers for their valuable comments and suggestions, which substantially improved the quality of an earlier version of this paper. Moreover, the authors are very grateful to the Laboratory of Stochastic Models, Statistics and Applications, University of Saida, Algeria, for its administrative and technical support.

Author information


Corresponding author

Correspondence to Saâdia Rahmani.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest and that no funding was received to carry out this research.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


1.1 Proof of Lemma 1

We mention that the proof of this lemma is very close to the proof of Theorem 2.1 of Bouanani et al. (2019). Specifically, we have:

$$ \begin{array}{@{}rcl@{}} \sqrt{n\phi_{x}(h_{\scriptscriptstyle K})}A_{n}(x,y)&=& \displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})}}{n\mathbb{E}({\Delta}_{1}K_{1})} \left[ \sum\limits_{j=1}^{n} {\Delta}_{j}K_{j}\left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j} - F^{x}(y)\right)\right.\\ && \left. -\ \mathbb{E} \left( \sum\limits_{j=1}^{n} {\Delta}_{j}K_{j}\left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right) \right)\right]. \end{array} $$
(3.5)

Next, by using Eq. 2.4, Eq. 3.5 can be rewritten as follows:

$$ \begin{array}{@{}rcl@{}} &&\quad\lefteqn{ \sqrt{n\phi_{x}(h_{\scriptscriptstyle K})}(\widetilde{ F}^{x}_{N}(y)-\mathbb{E}(\widetilde{ F}^{x}_{N}(y))}\\ & &= \displaystyle\frac{1}{n\mathbb{E}({\beta_{1}^{2}}K_{1})}\displaystyle\sum\limits_{i=1}^{n} {\beta_{i}^{2}}K_{i} \displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})}\mathbb{E}({\beta_{1}^{2}}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})} \displaystyle\sum\limits_{j=1}^{n} K_{j} \left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)\\ & &- \displaystyle\frac{1}{n\mathbb{E}(\beta_{1}K_{1})}\displaystyle\sum\limits_{i=1}^{n}\beta_{i}K_{i} \displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} \mathbb{E}(\beta_{1}K_{1}) }{\mathbb{E}({\Delta}_{1}K_{1})}\displaystyle\sum\limits_{j=1}^{n} \beta_{j}K_{j} \left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)\\ & & - \mathbb{E}\left( \displaystyle\frac{1}{n\mathbb{E}({\beta_{1}^{2}}K_{1})}\displaystyle\sum\limits_{i=1}^{n}{\beta_{i}^{2}}K_{i} \displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})}\mathbb{E}({\beta_{1}^{2}}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})}\sum\limits_{j=1}^{n} K_{j}\left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)\right)\\ & & + \mathbb{E}\left( \frac{1}{n\mathbb{E}(\beta_{1}K_{1})}\displaystyle\sum\limits_{i=1}^{n} \beta_{i}K_{i} \displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})}\mathbb{E}(\beta_{1}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})} \displaystyle\sum\limits_{j=1}^{n} \beta_{j}K_{j} \left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)\right). \end{array} $$

Let

$$ \begin{array}{@{}rcl@{}} {\Upsilon}_{n}&=&\underbrace{\left( \displaystyle\frac{1}{n\mathbb{E}({\beta_{1}^{2}}K_{1})}\displaystyle\sum\limits_{i=1}^{n}{\beta_{i}^{2}}K_{i}-1 \right)\displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} \mathbb{E}({\beta_{1}^{2}}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})}\displaystyle\sum\limits_{j=1}^{n} K_{j} \left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)}_{S_{1.n}}\\ &-&\mathbb{E}\left( S_{1.n} \right), \end{array} $$
$$ {\Theta}_{n}=\underbrace{\displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} \mathbb{E}({\beta_{1}^{2}}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})}\displaystyle\sum\limits_{j=1}^{n} K_{j}\left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)}_{S_{2.n}}-\mathbb{E}\left( S_{2.n}\right),$$

and

$$ \begin{array}{@{}rcl@{}} D_{n} &=& \underbrace{\displaystyle\frac{1}{n\mathbb{E}(\beta_{1}K_{1})}\displaystyle\sum\limits_{i=1}^{n}\beta_{i}K_{i} \displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} \mathbb{E}(\beta_{1}K_{1}) }{\mathbb{E}({\Delta}_{1}K_{1})}\displaystyle\sum\limits_{j=1}^{n} \beta_{j}K_{j} \left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j}-F^{x}(y)\right)}_{S_{3.n}}\\ &-&\mathbb{E}\left( S_{3.n} \right). \end{array} $$

Finally, the decomposition (3.5) becomes:

$$\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} A_{n}(x,y)= {\Upsilon}_{n}+ {\Theta}_{n}-D_{n}.$$

So, it suffices to apply Slutsky's theorem to prove the asymptotic normality of Eq. 3.5. The latter is obtained by proving the following asymptotic results:

$$ {\Theta}_{n}\xrightarrow{\enskip d\enskip}\mathcal{N}(0, V_{J K}(x,y)). $$
(3.6)
$$ {\Upsilon}_{n}\xrightarrow{\enskip P\enskip}0. $$
(3.7)
$$ D_{n}\xrightarrow{\enskip P\enskip}0. $$
(3.8)

1.2 Proof of Lemma 3

Proof of (i)

Using the fact that \( \delta _{1} T_{1} \overline {G}^{-1}(T_{1}) \leq \tau _{L} \overline {G}^{-1}(\tau _{L}) \) (see Kohler et al., 2002, for more details) and by assumptions ((H.2)-(i)), (H.3) and (H.5), we have:

$$ \begin{array}{@{}rcl@{}} h^{-l}_{K}\delta_{1} [ \overline{G}(T_{1})]^{-1} {K^{s}_{1}} \mid\beta_{1}\mid^{l}&\!\!\!\! \leq\!\!\!\!& h^{-l}_{K} \overline{G}^{-1}(T_{F}) {K^{s}_{1}} \mid\rho(X_{1},x)\mid^{l}, \\ &\!\!\!\! \leq\!\!\!\!& C h^{-l}_{K} \overline{G}^{-1}(T_{F})\!\! \mid\rho(X_{1},x)\mid^{l} \displaystyle{1\!\!1_{[-1,1]}(h^{-1}_{K} \rho(X_{1},x))},\\ &\!\!\!\! =\!\!\!\!&C^{\prime}\displaystyle{1\!\!1_{[-1,1]}(h^{-1}_{K} \rho(X_{1},x))}. \end{array} $$

Therefore, we obtain:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left( \delta_{1} [ \overline{G}(T_{1})]^{-1} {K^{s}_{1}}\mid\beta_{1}\mid^{l}\right)& \leq& C^{\prime} {h^{l}_{K}} \mathbb{E}\left( \displaystyle{1\!\!1_{[-1,1]}(h^{-1}_{K} \rho(X_{1},x))}\right), \\ & \leq& C^{\prime} {h^{l}_{K}} \phi_{x}(h_{K}). \end{array} $$

Proof of (ii)

By applying Hypothesis (H.5) and using the fact that \(\displaystyle {1\!\!1_{Y_{1}\leq C_{1}}} \varphi (T_{1})=\displaystyle {1\!\!1_{Y_{1}\leq C_{1}}} \varphi (Y_{1})\), where \(\varphi(\cdot)\) is a measurable function, we get:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left( \delta_{1} T_{1} [ \overline{G}(T_{1})]^{-1} \mid X_{1}\right)&=&\mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}} Y_{1} [\overline{G}(Y_{1})]^{-1} \mid X_{1}\right),\\ &=&\mathbb{E}\left( Y_{1} \overline{G}^{-1}(Y_{1}) \mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}}\mid(X_{1},Y_{1})\right)\mid X_{1}\right),\\ &=&\mathbb{E}\left( Y_{1}\mid X_{1}\right). \end{array} $$

Proof of (iii)

From the previous relationship (ii) and by assumption (H.5), we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left( \displaystyle\frac{1}{n}\displaystyle\sum\limits_{i=1}^{n} \delta_{i} [\overline{G}(T_{i})]^{-1} U(X_{i},T_{i})\right)& = & \mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}} [\overline{G}(Y_{1})]^{-1} U(X_{1},Y_{1})\right),\\ & = &\mathbb{E}\left( [\overline{G}(Y_{1})]^{-1} \mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}}\mid(X_{1},Y_{1})\right) U(X_{1},Y_{1})\right),\\ & = &\mathbb{E}(U(X_{1},Y_{1})). \end{array} $$

Proof of (iv)

This assertion is obtained by a straightforward application of the Glivenko–Cantelli theorem for the censored case (see Kohler et al., 2002, for more details).
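
The uniform convergence invoked here concerns the Kaplan–Meier estimator \(\overline{G}_n\) of the censoring survival function \(\overline{G}\). As a concrete illustration, a minimal Python routine computing \(\overline{G}_n\) at the observed times could read as follows; the handling of ties and the lower truncation constant are illustrative choices, not part of the paper.

```python
import numpy as np

def km_censoring_survival(T, delta):
    """Kaplan-Meier estimate of the censoring survival function G_bar,
    evaluated at each observed time T_i (illustrative sketch, ties ignored).
    Censoring events are the observations with delta_i = 0."""
    n = len(T)
    order = np.argsort(T)
    d_sorted = np.asarray(delta)[order]
    at_risk = n - np.arange(n)                    # n, n-1, ..., 1 subjects at risk
    factors = 1.0 - (1.0 - d_sorted) / at_risk    # jumps only at censoring times
    G_sorted = np.cumprod(factors)
    G = np.empty(n)
    G[order] = G_sorted
    return np.clip(G, 1e-6, 1.0)                  # keeps the weights 1/G_bar bounded
```

The output can be plugged in as the G_bar argument of the estimator sketched after the Abstract.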

1.3 Proof of Lemma 4

By the definition of conditional variance we have:

$$ \begin{array}{@{}rcl@{}} Var\left( \delta_{1} [\overline{G}(T_{1})]^{-1} J\left( \displaystyle\frac{y-T_{1}}{h_{J}}\right)\mid X_{1}\right) &=& \mathbb{E}\left( \left( \delta_{1} [\overline{G}(T_{1})]^{-1} J\left( \displaystyle\frac{y-T_{1}}{h_{J}}\right)\right)^{2}\mid X_{1}\right) \\ &&- \left[\mathbb{E}\left( \delta_{1} [\overline{G}(T_{1})]^{-1} J\left( \displaystyle\frac{y-T_{1}}{h_{J}}\right)\mid X_{1}\right)\right]^{2}. \end{array} $$
(3.9)

For the second term on the right-hand side of Eq. 3.9, by using our technical Lemma 3, we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left( \delta_{1} [\overline{G}(T_{1})]^{-1} J\left( \displaystyle\frac{y-T_{1}}{h_{J}}\right)\mid X_{1}\right) &=& \mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}} [\overline{G}(Y_{1})]^{-1} J\left( \displaystyle\frac{y-Y_{1}}{h_{J}}\right)\mid X_{1}\right)\\ &=& \mathbb{E}\left( J\left( \displaystyle\frac{y-Y_{1}}{h_{J}}\right)\mid X_{1}\right). \end{array} $$

Then, an integration by parts and a change of variables, together with the notation (2.5), allow us to get:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left( J\left( \displaystyle\frac{y-Y_{1}}{h_{J}}\right)\mid X_{1}\right) &=& \displaystyle{\int}_{\mathbb{R}} J(h_{\scriptscriptstyle J}^{-1}(y-z))f^{X_{1}}(z) dz \\ &=& \displaystyle{\int}_{\mathbb{R}} J^{\prime}(t)F^{X_{1}}(y-h_{\scriptscriptstyle J}t) dt\\ &=& \displaystyle{\int}_{\mathbb{R}} J^{\prime}(t)\left( F^{X_{1}}(y-h_{\scriptscriptstyle J}t)-F^{x}(y)\right)dt+ \displaystyle{\int}_{\mathbb{R}} J^{\prime}(t)F^{x}(y)dt\\ &=& F^{x}(y)+o(1). \end{array} $$

This last result is obtained by using assumption ((H.1)-(iii)) and ((H.3)-(ii)).

On the other hand, the first term on the right-hand side of Eq. 3.9 can be written as follows:

$$ \begin{array}{@{}rcl@{}} \lefteqn{\mathbb{E}\left( \left( \delta_{1} ([ \overline{G}(T_{1})]^{-1}) J\left( \displaystyle\frac{y-T_{1}}{h_{J}}\right)\right)^{2}\mid X_{1}\right)=}\\ & & \mathbb{E}\left( \delta_{1} ([ \overline{G}(T_{1})]^{-1})^{2} \left( J\left( \displaystyle\frac{y-T_{1}}{h_{J}}\right)\right)^{2}\mid X_{1}\right),\\ & & =\mathbb{E}\left( ([\overline{G}(Y_{1})]^{-1})^{2} J\left( \left( \displaystyle\frac{y-Y_{1}}{h_{J}}\right)\right)^{2}\mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}}\mid(X_{1},Y_{1})\right)\mid X_{1}\right),\\ & &=\mathbb{E}\left( [\overline{G}(Y_{1})]^{-1} J\left( \left( \displaystyle\frac{y-Y_{1}}{h_{J}}\right)\right)^{2}\mid X_{1}\right),\\ & &=\int J\left( \left( \displaystyle\frac{y-z}{h_{J}}\right)\right)^{2} [\overline{G}(z)]^{-1}dF^{X_{1}}(z). \end{array} $$

Again, by applying the same change of variables and using a Taylor expansion of order one of \(\overline {G}(y-th_{J})\), we obtain:

$$ \begin{array}{@{}rcl@{}} \int \!J\!\!\left( \!\!\left( \displaystyle\frac{y - z}{h_{J}}\right)\!\!\right)^{\!\!2} \!\![\overline{G}(z)]^{-\!1}dF^{X_{1\!}}(\!z\!) \!\!\!\!\!&=&\!\!\!\! \int J^{2}(t) [\overline{G}(y-th_{J})]^{-1}dF^{X_{1}}(y-th_{J}), \\ &=&\!\!\!\! \int J^{2}(t) [\overline{G}(y)]^{-1}dF^{X_{1}}(y-th_{J}),\\ &+&\!\!\!\! \frac{h_{J} }{\overline{G}^{2}(y)} \!\int \!J^{2}(t) \!y \overline{G}^{\prime\!}(y^{\ast})dF^{X_{1\!}}(y - th_{J}) + o(h_{J}),\\ &=&\!\!\! \omega_{1}+\omega_{2}, \end{array} $$

where \(y^{\ast}\) is between \(y\) and \(y-th_{J}\).

Now, under assumption ((H.3)-(ii)) and ((H.5)-(ii)), we have:

$$\omega_{2}\leq C \frac{{h_{J}^{2}} }{\overline{G}^{2}(\tau_{L})} \int J^{2}(t) y f^{X_{1}}(y-th_{J})dt+o(1)=O({h^{2}_{J}}).$$

Concerning the term \(\omega_{1}\), an integration by parts gives:

$$ \begin{array}{@{}rcl@{}} \omega_{1}&=& {\int}_{\mathbb{R}}[\overline{G}(y)]^{-1}J^{2}(t)dF^{X_{1}}(y-th_{J}),\\ &=& [\overline{G}(y)]^{-1} {\int}_{\mathbb{R}}2 J^{\prime}(t)J(t)\left( F^{X_{1}}(y-th_{J})-F^{x}(y)\right)dt\\ && +[\overline{G}(y)]^{-1} {\int}_{\mathbb{R}}2 J^{\prime}(t)J(t)F^{x}(y)dt. \end{array} $$

By the continuity of \(F^{x}\) and remarking that \( {\int}_{\mathbb{R}}2 J^{\prime}(t){J(t)}F^{x}(y)dt=F^{x}(y)\), we deduce that \(\omega_{1}=\displaystyle \frac {F^{x}(y)}{\overline {G}(y)}+o(1)\).
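
Collecting the two terms (the \(O({h^{2}_{J}})\) contribution of \(\omega_{2}\) being negligible) and subtracting the squared conditional mean computed above, the conditional variance used in the sequel is, up to \(o(1)\):

$$ Var\left( \delta_{1} [\overline{G}(T_{1})]^{-1} J_{1}\mid X_{1}\right)= \frac{F^{x}(y)}{\overline{G}(y)}-\left( F^{x}(y)\right)^{2}+o(1)=F^{x}(y)\left( \frac{1}{\overline{G}(y)}-F^{x}(y)\right)+o(1), $$

which is precisely the quantity appearing in the limit (3.13) below.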

Proof of Eq. 3.6

In order to apply the Lindeberg central limit theorem, we first need to compute the asymptotic variance of \({\Theta}_{n}\). To this end, we have:

$$ \begin{array}{@{}rcl@{}} \lefteqn{ Var({\Theta}_{n})=\displaystyle\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta^{2}_{1}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})} \left( \mathbb{E}\left( {K^{2}_{1}}\left( \delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}-F^{x}(y)\right)^{2}\right)\right)}\\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!& &\quad\quad\quad~ -\displaystyle\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta^{2}_{1}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})} \mathbb{E}^{2}\left( K_{1}\!\left( \delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1} - F^{x}(y)\right)\right). \end{array} $$
(3.10)

Concerning the second term on the right hand side of Eq. 3.10, we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}^{2}(K_{1}(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1} - F^{x}(y)))\!=\mathbb{E}^{2}\left( K_{1}(\mathbb{E}(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1} \mid X_{1}) - F^{x}(y))\right). \end{array} $$

Moreover, by using Eq. 3.9, we get:

$$ \mathbb{E}(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1} \mid X_{1})-F^{x}(y)\longrightarrow0\quad\text{as}\quad n\longrightarrow \infty. $$
(3.11)

Now, for the first term on the right hand side of Eq. 3.10, we have:

$$ \begin{array}{@{}rcl@{}} &&\quad\lefteqn{\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K}) \mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})}\mathbb{E}\left( (\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}-F^{x}(y))^{2}{K_{1}^{2}}\right)}\\ && =\frac{n^{2}\phi_{x}(h)\ \mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{ \mathbb{E}^{2}({\Delta}_{1}K_{1})} \mathbb{E}\left( \mathbb{E}((\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}-F^{x}(y))^{2}\mid X_{1}) {K_{1}^{2}}\right),\\ & & =\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})}\mathbb{E}\left( Var(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}\mid X_{1}){K_{1}^{2}}\right),\\ & &\quad+\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})} \mathbb{E}\left( (E(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}\mid X_{1})-F^{x}(y))^{2}{K_{1}^{2}}\right).\\ \end{array} $$
(3.12)

Combining Lemma A.1 of Zhou and Lin (2016) and Lemma 4 with Eq. 3.11, we get:

$$ \displaystyle\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})} \mathbb{E}\left( Var(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}\mid X_{1}){K_{1}^{2}}\right)\xrightarrow[n\to\infty]{} \displaystyle\frac{M_{2}}{{M^{2}_{1}}} F^{x}(y) \left( \displaystyle\frac{1}{\overline{G}(y)}-F^{x}(y)\right), $$
(3.13)

and

$$\displaystyle\frac{n^{2}\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})} \mathbb{E}\left( \left( \mathbb{E}(\delta_{1} [\overline{G}(Y_{1})]^{-1} J_{1}\mid X_{1})-F^{x}(y)\right)^{2}{K_{1}^{2}}\right)\longrightarrow 0, \quad\text{as}\quad n\longrightarrow \infty. $$

To complete the proof of claim (3.6), it suffices to apply Lindeberg's central limit theorem to \(R_{jn}\), which satisfies the following condition:

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{Var({\Theta}_{n})}\displaystyle\sum\limits_{j=1}^{n}\mathbb{E}\left( R_{jn}^{2}\displaystyle{1\!\!1_{{\mid R_{jn}\mid>\varepsilon \sqrt{Var({\Theta}_{n})}} }}\right)\\&=&\displaystyle\frac{1}{Var({\Theta}_{n})}\displaystyle\mathbb{E}\left( (\sqrt{n} R_{1n})^{2}\displaystyle{1\!\!1_{{\mid \sqrt{n}R_{1n}\mid>\varepsilon \sqrt{n Var({\Theta}_{n})}} }}\right) \!\to\! 0, \text{ for all } \varepsilon>0, \end{array} $$

where

$$ R_{1n}=\displaystyle\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} \mathbb{E}({\beta_{1}^{2}}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})}\left( K_{1}\left( \delta_{1} [\overline{G} (T_{1})]^{-1} J_{1}-F^{x}(y)\right)\right)-\mu_{n}.$$

Here \(\mu_{n}\) denotes the mean of the first term on the right-hand side.

On the other hand, we have:

$$\mathbb{E}\left( (\sqrt{n} R_{1n})^{2}\right)=Var({\Theta}_{n})\longrightarrow V_{J K}(x,y) \text{ as } n\longrightarrow \infty. $$

From the technical Lemma A.1 of Zhou and Lin (2016), by using the fact that \( \mid J_{1} \delta _{1} [\overline {G}(Y_{1})]^{-1} -F^{x}(y)\mid \leq \overline {G}^{-1} (\tau _{L})+1\) (see Ould Saïd and Sadki, 2011), and under assumptions (H.3)(i), (H.3)(ii) and (H.4)(iii), we obtain

$$ \begin{array}{@{}rcl@{}} \left( \displaystyle\frac{n}{ n Var({\Theta}_{n})}\right)^{\frac{1}{2}}\mid R_{1n}\mid\!\!\! &\leq & \!\!\!C \displaystyle \!\left( \!\! \frac{ n}{(n - 1)^{2} \phi_{x}(h_{K}) Var({\Theta}_{n})}\!\!\right)^{\frac{1}{2}} \!\longrightarrow\! 0,\! \text{ as } n\!\longrightarrow\! \infty. \end{array} $$

Therefore, for n large enough, we deduce that \(\left \lbrace \sqrt {n}\mid R_{1n}\mid >\epsilon \sqrt {n Var({\Theta }_{n})}\right \rbrace \) is an empty set. The proof of the claim Eq. 3.6 is therefore complete.

Proof of Eq. 3.7

By using the Cauchy–Schwarz inequality, we obtain:

$$ \begin{array}{@{}rcl@{}} \!\!\!\!\!\!\lefteqn{\mathbb{E}\mid(S_{1.n} - \mathbb{E}(S_{1.n})\mid\leq 2 \sqrt{\mathbb{E} \left( \displaystyle\frac{1}{n\mathbb{E}({\beta_{1}^{2}}K_{1})}\displaystyle\sum\limits_{i=1}^{n}{\beta_{i}^{2}}K_{i}-1 \right)^{2} }\times }\\ \!\!\!\!\!\!\!\!& & \sqrt{\mathbb{\!E}\!\left( \underbrace{\displaystyle\!\frac{\sqrt{n\phi_{x}(h_{\scriptscriptstyle K})} \mathbb{E}({\beta_{1}^{2}}K_{1})}{\mathbb{E}({\Delta}_{1}K_{1})}\displaystyle\sum\limits_{j=1}^{n} K_{j} \left( \delta_{j} [\overline{G} (T_{j})]^{-1} J_{j} - F^{x}(y)\right)}_{L_{n}}\!\right)^{2}}. \end{array} $$
(3.14)

Again, by applying the technical Lemma A.1 of Zhou and Lin (2016) to the first term on the right-hand side of Eq. 3.14, we get:

$$ \begin{array}{@{}rcl@{}} \mathbb{E} \left( \frac{1}{n\mathbb{E}({\beta_{1}^{2}}K_{1})}\sum\limits_{i=1}^{n}{\beta_{i}^{2}}K_{i}-1 \right)^{2} &=& Var \left( \frac{1}{n\mathbb{E}({\beta_{1}^{2}}K_{1})}\sum\limits_{i=1}^{n}{\beta_{i}^{2}}K_{i} \right),\\ &=&\frac{1}{n^{2}\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})} n Var({\beta_{1}^{2}}K_{1}),\\ &\leq & \frac{1}{n O({h_{K}^{4}}{\phi^{2}_{x}}(h_{K}))}\mathbb{E}({\beta_{1}^{4}}{K_{1}^{2}}), \\ &\leq & \frac{C}{n\phi_{x}(h_{K})}. \end{array} $$
(3.15)

Concerning the second term on the right-hand side of Eq. 3.14, we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}({L_{n}^{2}})&=&\displaystyle\frac{ n\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})}\left( n\mathbb{E}(K_{1} (\delta_{1} [\overline{G} (T_{1})]^{-1} J_{1}-F^{x}(y)))^{2}\right) \\ &+&\displaystyle\frac{ n\phi_{x}(h_{\scriptscriptstyle K})\mathbb{E}^{2}({\beta_{1}^{2}}K_{1})}{\mathbb{E}^{2}({\Delta}_{1}K_{1})} \left( n(n-1)\mathbb{E}^{2}(K_{1} (\delta_{1} [\overline{G} (T_{1})]^{-1} J_{1}-F^{x}(y)))\right). \\ \end{array} $$

By using the fact that \( \mid J_{1} \delta _{1} [\overline {G}(Y_{1})]^{-1} -F^{x}(y)\mid \leq [\overline {G}(\tau _{L})]^{-1} +1\) (see Ould Saïd and Sadki, 2011) and by applying the same technical Lemma A.1 of Zhou and Lin (2016), we obtain, under assumption (H.3) and Eq. 3.11:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}({L_{n}^{2}})&=&\displaystyle\frac{n \phi_{x}(h_{K})O({h_{K}^{2}}{\phi_{x}^{2}}(h_{K}))}{(n-1)^{2}O({h_{K}^{4}}{\phi_{x}^{4}}(h_{K}))}\left( nO(\phi_{x}(h_{K})) +n(n-1)o({\phi_{x}^{2}}(h_{K})) \right)\\ &=& O(1)+o(n\phi_{x}(h_{K})). \end{array} $$
(3.16)

In addition, by combining (3.15) and (3.16), we deduce that

$$\mathbb{E}\mid S_{1.n}-\mathbb{E}(S_{1.n})\mid=o(1).$$

Finally, to obtain the convergence in probability, it suffices to use the Bienaymé–Tchebychev inequality. Indeed, for all ε > 0 we obtain

$$\mathbb{P}(\mid S_{1.n}-\mathbb{E}(S_{1.n})\mid >\varepsilon) \leqslant\frac{\mathbb{E}(\mid S_{1.n}-\mathbb{E}(S_{1.n})\mid) }{\varepsilon}\xrightarrow[n\to\infty]{}0.$$

Proof of Eq. 3.8

By following the same idea as in the proof of claim (3.7), we have

$$\mathbb{E}\mid S_{3.n}-\mathbb{E}(S_{3.n})\mid=o(1).$$

In addition, by applying the Bienaymé–Tchebychev inequality, we deduce that:

$$ S_{3.n}-\mathbb{E}(S_{3.n})\xrightarrow{\enskip P\enskip}0, \quad\text{as}\quad n\longrightarrow\infty.$$

1.4 Proof of Lemma 2

To prove the convergence in probability of \(R_{n}(x, y)\) to 0, it suffices to establish the two following results:

$$ \mathbb{P}(\mid \widehat{F}^{x}_{N}(y)-\widetilde{F}^{x}_{N}(y)\mid>\varepsilon)\leq\displaystyle\frac{\mathbb{E}(\mid \widehat{F}^{x}_{N}(y)-\widetilde{F}^{x}_{N}(y)\mid)}{\varepsilon}, $$
(3.17)
$$ B_{n}(x,y)=\mathbb{E}(\widetilde{F}^{x}_{N}(y))- F^{x}(y)\longrightarrow0, \quad\text{as}\quad n\longrightarrow \infty. $$
(3.18)

By the definition of \( \widehat {F}^{x}_{N}(y)\) and \(\widetilde {F}^{x}_{N}(y)\), we obtain:

$$ \begin{array}{@{}rcl@{}} \mid \widehat{F}^{x}_{N}(y)-\widetilde{F}^{x}_{N}(y)\mid\!\!\! &\leq&\!\!\!\! \displaystyle \frac{1}{n(n - 1)\mathbb{E}(W_{12})}\displaystyle \sum\limits_{i \neq j} \mid J_{i} \delta_{j} W_{ij} \left( [\overline{G}_{n}(T_{i})]^{-1} - [\overline{G}(T_{i})]^{-1} \right)\! \mid \\ &\leq&\!\!\!\! \displaystyle \frac{ C \sup\limits_{{t\leq \tau_{L}}}\mid \overline{G}_{n} (t)- \overline{G}(t) \mid}{ \overline{G}_{n} (\tau_{L}) \overline{G} (\tau_{L})} \mid\widehat{F}^{x}_{D}\mid. \end{array} $$

On the other hand, using Lemma 3 in Leulmi (2019), we have \(\mid \widehat {F}^{x}_{D}\mid \leq \displaystyle \frac {\log(n)}{n\phi _{x}(h_{K})}+1\); then, by our technical Lemma 3 and assumption (H.4)(ii), we get:

$$ \widehat{F}^{x}_{N}(y)-\widetilde{F}^{x}_{N}(y) \longrightarrow 0, \quad\text{as}\quad n\longrightarrow \infty.$$

Proof of Eq. 3.18

By the definition of \(\widetilde { F}^{x}_{N}(y)\), we have:

$$ \begin{array}{@{}rcl@{}} \mathbb{E}(\widetilde{ F}^{x}_{N}(y))-F^{x}(y)&=&\displaystyle\frac{1}{\mathbb{E}({\Delta}_{1}K_{1})} \mathbb{E}\left( {\Delta}_{1}K_{1}\delta_{1} [\overline{G}(T_{1})]^{-1} J_{1}\right)-F^{x}(y)\\ &=&\displaystyle\frac{1}{\mathbb{E}({\Delta}_{1}K_{1})} \mathbb{E}\left( {\Delta}_{1}K_{1}\mathbb{E}\left( \delta_{1} [\overline{G}(T_{1})]^{-1} J_{1}\mid X_{1}\right)\right) - F^{x}(y), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left( \delta_{1} [\overline{G}(T_{1})]^{-1} J_{1}\mid X_{1}\right) &=&\mathbb{E}\left( J_{1}[\overline{G}(Y_{1})]^{-1} \mathbb{E}\left( \displaystyle{1\!\!1_{Y_{1}\leq C_{1}}}\mid(X_{1},Y_{1})\right)\mid X_{1}\right)\\ &=&\mathbb{E}\left( J(h^{-1}_{J}(y-Y_{1}))\mid X_{1}\right). \end{array} $$

Finally, by using Lemma 3.2 in Demongeot et al. (2014) and by assumption (H.4)(ii), we obtain

$$ B_{n}(x,y)\longrightarrow 0, \quad\text{as}\quad n\longrightarrow\infty. $$

Conclusion

The theoretical study in this paper aims mainly at investigating the relationship between a censored scalar response and a functional covariate through the local linear estimation of the conditional CDF. The application part was devoted to studying its performance on finite samples; this study shows that the local linear method is more efficient than the classical kernel method in the presence of censored data. Accordingly, this paper makes three main contributions. First, the focus has been on determining the leading terms, namely the bias and the asymptotic variance, with results obtained under standard hypotheses that allow great flexibility in the selection of the parameters. Second, the proposed asymptotic study can also be considered as a preliminary investigation opening various theoretical issues, among them the optimal selection of the bi-functional operators β and δ, the optimal choice of the smoothing parameters, and the treatment of different types of censorship. Third, the estimation procedure used in this study can also be employed in other conditional models. Finally, this study contributes to both theory and practice, as evidenced by the numerous open questions it raises.


About this article


Cite this article

Rahmani, S., Bouanani, O. Local linear estimation of the conditional cumulative distribution function: Censored functional data case. Sankhya A 85, 741–769 (2023). https://doi.org/10.1007/s13171-021-00276-x

