
Nearest neighbors estimation for long memory functional data

  • Original Paper

Statistical Methods & Applications

Abstract

In this paper, we study the asymptotic properties of nearest neighbors estimation for long memory functional data. Under some regularity assumptions, we establish the asymptotic normality and the uniform consistency of the nearest neighbors estimators for nonparametric regression models in which both the explanatory variable and the errors exhibit long memory and the explanatory variable takes values in an abstract functional space. The finite sample performance of the proposed estimator is assessed through simulation studies.
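To fix ideas, the estimator studied here, a kNN average of responses over the curves nearest to \(x\) in a semi-metric \(d\), can be sketched numerically. The sinusoidal curve model, the FARIMA(0, d, 0) long memory errors, and all parameter choices below are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(42)

def farima_noise(n, d=0.3, trunc=500):
    # FARIMA(0, d, 0) errors (long memory for 0 < d < 1/2) via a truncated
    # MA(infinity) expansion: psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j
    psi = np.ones(trunc)
    for j in range(1, trunc):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    z = rng.standard_normal(n + trunc - 1)
    return np.convolve(z, psi, mode="valid")  # length n

# Simulate functional regression Y_i = r(X_i) + eps_i with curves
# X_i(t) = a_i sin(2 pi t) + b_i cos(2 pi t) and r(X) = int_0^1 X(t)^2 dt.
n, m, k = 400, 50, 40                      # sample size, grid size, neighbors
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
a, b = rng.standard_normal(n), rng.standard_normal(n)
X = a[:, None] * np.sin(2 * np.pi * t) + b[:, None] * np.cos(2 * np.pi * t)
r_vals = 0.5 * (a ** 2 + b ** 2)           # exact value of int_0^1 X_i(t)^2 dt
Y = r_vals + farima_noise(n)

# kNN estimator with uniform kernel: average Y over the k curves closest
# to the target curve x in the (discretized) L2 metric.
x = np.sin(2 * np.pi * t)                  # target curve (a = 1, b = 0), r(x) = 0.5
dist = np.sqrt(((X - x) ** 2).sum(axis=1) * dt)
nearest = np.argsort(dist)[:k]
est = Y[nearest].mean()
r_true = 0.5
print(f"kNN estimate {est:.3f}  vs  r(x) = {r_true}")
```

The random bandwidth \(H_n\) of the paper is implicit here: it is the distance from \(x\) to its \(k\)-th nearest curve.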


Fig. 1



Author information

Corresponding author

Correspondence to Lihong Wang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by National Natural Science Foundation of China (NSFC) Grants 11671194 and 11501287.

Appendix


Proof of Lemma 1

It suffices to show that, as \(n\rightarrow \infty \),

$$\begin{aligned} \text {E}[g_n(x,H_n)]\longrightarrow 0 \end{aligned}$$
(10)

and

$$\begin{aligned} \text {Var}[g_n(x,H_n)]\longrightarrow 0. \end{aligned}$$
(11)

Let \(\xi _{n1}=\min (H_n, h_n)\) and \(\xi _{n2}=\max (H_n, h_n)\). Then, by (8), \(\xi _{n1}=h_n(1+ o(n^{-\rho }))\), \(\xi _{n2}=h_n(1+ o(n^{-\rho }))\) and \(\xi _{n2}-\xi _{n1}=o(h_nn^{-\rho })\) a.s.
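The closeness of the random kNN bandwidth \(H_n\) to the deterministic bandwidth \(h_n=k_n/(nf_x(0))\) exploited here can be seen in a simple scalar toy case (an illustrative assumption-laden example, not the functional setting of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy scalar case: X_i ~ Uniform(0, 1), x = 0.5, d(x, X_i) = |x - X_i|.
# Then F_x(u) = P(d(x, X) <= u) = 2u near 0, so f_x(0) = 2 and the
# deterministic bandwidth is h_n = k_n / (n * f_x(0)).
n, k = 20000, 200
dist = np.abs(rng.uniform(0.0, 1.0, n) - 0.5)
H_n = np.sort(dist)[k - 1]    # random bandwidth: distance to k-th nearest neighbor
h_n = k / (n * 2.0)
print(f"H_n = {H_n:.5f}, h_n = {h_n:.5f}, ratio = {H_n / h_n:.3f}")
```

As \(n\) and \(k\) grow, the ratio \(H_n/h_n\) concentrates around 1, which is the content of the almost-sure expansion above.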

Since \(\text {E}\varepsilon _0=0\) and \(k_n=nh_nf_x(0)\), by Assumption (A4),

$$\begin{aligned} \text {E}[g_n(x, H_n)]= & {} \text {E}\big [n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n(K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n))(Y_i-r(x))\big ]\nonumber \\= & {} n^{3/2-\beta }k_n^{-1} \text {E}[(K(d(x,X_1)/H_n)-K(d(x,X_1)/h_n))(r(X_1)-r(x))]\nonumber \\\le & {} Cn^{3/2-\beta }k_n^{-1}\text {E}\Big [\int _{\xi _{n1}}^{\xi _{n2}}u dF_x(u)\Big ]\nonumber \\= & {} Cn^{3/2-\beta }k_n^{-1} \text {E}[(\xi _{n2}-\xi _{n1})\zeta _n f_x(\zeta _n)]\nonumber \\= & {} C n^{3/2-\beta }k_n^{-1}h_n o(n^{-\rho }) \text {E}[\zeta _n f_x(\zeta _n)]\nonumber \\= & {} o(n^{\alpha -\beta -1/2-\rho })\text {E}[f_x(\zeta _n)]/f^2_x(0), \end{aligned}$$
(12)

where \(\xi _{n1}<\zeta _n<\xi _{n2}\). Hence (12), together with Assumption (A2) and the fact that \(\zeta _n=h_n(1+o(1))\) a.s., implies (10).

Let \(Z_{i1}=(K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n))(r(X_i)-r(x))\) and \(Z_{i2}=(K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n))\varepsilon _i\). To prove (11), it is enough to show

$$\begin{aligned} \text {Var}\Big (n^{1/2-\beta -\alpha }\sum _{i=1}^n Z_{i1}\Big )\longrightarrow 0, \quad \text {and} \quad \text {Var}\Big (n^{1/2-\beta -\alpha }\sum _{i=1}^n Z_{i2}\Big )\longrightarrow 0. \end{aligned}$$

Note that

$$\begin{aligned} \text {Var}\Big (n^{1/2-\beta -\alpha }\sum _{i=1}^nZ_{i1}\Big )= & {} n^{2-2\beta -2\alpha }\text {Var}(Z_{01})+n^{1-2\beta -2\alpha }\sum _{i\ne j}\text {Cov}(Z_{i1},Z_{j1}). \end{aligned}$$

For the first term of the above variance, similarly to (12), we obtain

$$\begin{aligned} \text {Var}(Z_{01})\le & {} \text {E}\big [(K(d(x,X_0)/H_n)-K(d(x,X_0)/h_n))(r(X_0)-r(x))\big ]^2\\\le & {} C\text {E}\Big [\int _{\xi _{n1}}^{\xi _{n2}}u^2f_x(u)du\Big ]= C\text {E} [(\xi _{n2}-\xi _{n1})\zeta _n^2 f_x(\zeta _n)]\nonumber \\= & {} C h_n^3 o(n^{-\rho })\text {E}[f_x(\zeta _n)]. \end{aligned}$$

This means that

$$\begin{aligned} n^{2-2\beta -2\alpha }\text {Var}(Z_{01})=o(n^{\alpha -2\beta -1-\rho })=o(1). \end{aligned}$$

In addition, by the mean value theorem, \(F_x(\xi _{n2})-F_x(\xi _{n1})=f_x(\zeta _n) (\xi _{n2}-\xi _{n1})\) for some \(\xi _{n1}< \zeta _n<\xi _{n2}\). Moreover, by Assumption (A5),

$$\begin{aligned}&P(u_1\le d(x, X_i)\le u_2, u_1\le d(x, X_j)\le u_2)\\&\quad =P(d(x, X_i)\le u_2, d(x, X_j)\le u_2)+P(d(x, X_i)\le u_1, d(x, X_j)\le u_1)\\&\qquad -\,P( d(x, X_i)\le u_2, d(x, X_j)\le u_1)-P( d(x, X_i)\le u_1, d(x, X_j)\le u_2)\\&\quad \le F_x^2(u_2)+F_x^2(u_1)-2F_x(u_2)F_x(u_1)+C\gamma _x(i-j)\\&\quad =\big (F_x(u_2)-F_x(u_1)\big )^2+C\gamma _x(i-j), \end{aligned}$$

for any \(u_1\), \(u_2\) close to 0. Thus, by (8) and Assumption (A4),

$$\begin{aligned} \text {Cov}(Z_{i1},Z_{j1})= & {} \text {Cov}\left[ (r(X_i)-r(x))(K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n)),\right. \nonumber \\&\left. (r(X_j)-r(x))(K(d(x,X_j)/H_n)-K(d(x,X_j)/h_n))\right] \nonumber \\\le & {} C\text {E}\Big [d(x, X_i)d(x, X_j)\big |K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n)\big |\nonumber \\&\cdot \,\big |K(d(x,X_j)/H_n)-K(d(x,X_j)/h_n)\big |\Big ]\nonumber \\&+\,C\Big (\text {E} \big [d(x, X_i)\big |K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n)\big |\big ]\Big )^2\nonumber \\\le & {} C\text {E}\big [\xi _{n2}^2P(\xi _{n1}\le d(x, X_i)\le \xi _{n2}, \xi _{n1}\le d(x, X_j)\le \xi _{n2})\big ]\nonumber \\&+\,C\Big (\text {E}\big [\xi _{n2} P(\xi _{n1}\le d(x, X_0)\le \xi _{n2})\big ]\Big )^2\nonumber \\\le & {} C h_n^2(1+o(1))\Big \{\gamma _x(i-j)+\text {E}\big [F_x(\xi _{n2})-F_x(\xi _{n1})\big ]^2\Big \}\nonumber \\= & {} C h_n^2(1+o(1))\Big \{\gamma _x(i-j)+o(h_n^2 n^{-2\rho })\Big \}. \end{aligned}$$

Now using (7) and Assumption (A2), we arrive at, for some large enough N,

$$\begin{aligned} n^{1-2\beta -2\alpha }\sum _{i\ne j}\text {Cov}(Z_{i1},Z_{j1})= & {} Cn^{1-2\beta -2\alpha }h_n^2\Big \{\sum _{i\ne j}\gamma _x(i-j)+o(n^{2\alpha -2\rho })\Big \}\\= & {} O(n^{-2\beta -1})\Big \{n\sum _{k=1}^n\gamma _x(k)+o(n^{2\alpha -2\rho })\Big \}\\= & {} O(n^{-2\beta -1})\Big \{n\big (\sum _{k=1}^N+\sum _{k=N+1}^n\big )\gamma _x(k)+o(n^{2\alpha -2\rho })\Big \}\\\le & {} O(n^{-2\beta -1})\Big \{n\sum _{k=1}^n k^{-\tau _x D}+o(n^{2\alpha -2\rho })\Big \}\\= & {} O(n^{-2\beta -1})\Big \{n^{2-\tau _x D}+o(n^{2\alpha -2\rho })\Big \}=o(1). \end{aligned}$$
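The step \(\sum _{k=1}^n k^{-\tau _x D}=O(n^{1-\tau _x D})\) above is the standard partial-sum estimate \(\sum _{k\le n}k^{-a}\sim n^{1-a}/(1-a)\) for \(0<a<1\), which is what makes long-range-dependent covariances non-summable. A quick numerical check (illustrative only, with an arbitrary exponent):

```python
import numpy as np

# Partial sums of k**(-a) for 0 < a < 1 grow like n**(1-a) / (1 - a);
# this is the rate used when gamma_x(k) ~ k**(-tau_x * D) is not summable.
a, n = 0.4, 100_000
partial = np.sum(np.arange(1, n + 1, dtype=float) ** (-a))
asymptotic = n ** (1 - a) / (1 - a)
print(f"sum = {partial:.1f}, n^(1-a)/(1-a) = {asymptotic:.1f}, "
      f"ratio = {partial / asymptotic:.4f}")
```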

These bounds show that \(\text {Var}\big (n^{1/2-\beta -\alpha }\sum _{i=1}^n Z_{i1}\big )\longrightarrow 0\). To derive the same result for \(n^{1/2-\beta -\alpha }\sum _{i=1}^n Z_{i2}\), note that, by (2),

$$\begin{aligned}&\text {Var}\Big (n^{1/2-\beta -\alpha }\sum _{i=1}^nZ_{i2}\Big )\\&\quad = n^{2-2\beta -2\alpha }\text {Var}(Z_{02})+n^{1-2\beta -2\alpha }\sum _{i\ne j}\text {Cov}(Z_{i2},Z_{j2})\\&\quad = n^{2-2\beta -2\alpha }\text {E}\varepsilon _0^2\text {E}\big (K(d(x,X_0)/H_n)-K(d(x,X_0)/h_n)\big )^2\\&\qquad +\,n^{1-2\beta -2\alpha }\sum _{i\ne j}\gamma _\varepsilon (i-j)\text {E}\big [(K(d(x,X_i)/H_n)-K(d(x,X_i)/h_n))\\&\qquad \cdot \, (K(d(x,X_j)/H_n)-K(d(x,X_j)/h_n))\big ]\\&\quad \le Cn^{2-2\beta -2\alpha }\text {E}\big [P(\xi _{n1}\le d(x, X_0)\le \xi _{n2})\big ]\\&\qquad +\,n^{1-2\beta -2\alpha }\sum _{i\ne j}\gamma _\varepsilon (i-j)\text {E}\big [P(\xi _{n1}\le d(x, X_i)\le \xi _{n2}, \xi _{n1}\le d(x, X_j)\le \xi _{n2})\big ]\\&\quad \le C n^{2-2\beta -2\alpha }\text {E}(\xi _{n2}-\xi _{n1})\\&\qquad +\,Cn^{1-2\beta -2\alpha }\sum _{i\ne j}\gamma _\varepsilon (i-j)\big (\gamma _x(i-j)+\text {E} [F_x(\xi _{n2})-F_x(\xi _{n1})]^2\big )\\&\quad = o(n^{1-2\beta -\alpha -\rho })+O(n^{1-2\beta -2\alpha })\big (O(n^{2\beta +1-\tau _x D})+o(n^{2\beta +2\alpha -1-2\rho })\big )\\&\quad =o(n^{1-2\beta -\alpha -\rho })+O(n^{2-2\alpha -\tau _x D})+o(n^{-2\rho }). \end{aligned}$$

Since \(\rho >0\) and \(\tau _x\ge 1\), Assumption (A2) implies that \(1-2\beta -\alpha -\rho <0\) and \(2-2\alpha -\tau _x D<0\). That is,

$$\begin{aligned} \text {Var}\Big (n^{1/2-\beta -\alpha }\sum _{i=1}^nZ_{i2}\Big )=o(1). \end{aligned}$$

This completes the proof of Lemma 1. \(\square \)

Proof of Theorem 1

By (1) and (6),

$$\begin{aligned}&n^{1/2-\beta }(r_n(x,H_n)-r(x))\nonumber \\&\quad =n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n K(d(x,X_i)/h_n)(Y_i-r(x))+g_n(x, H_n)\nonumber \\&\quad =n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n K(d(x,X_i)/h_n)(r(X_i)-r(x)+\varepsilon _i)+g_n(x, H_n)\nonumber \\&\quad =I_{n1}+I_{n2}+g_n(x, H_n) \end{aligned}$$

where \(I_{n1}=n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n K(d(x,X_i)/h_n)(r(X_i)-r(x))\) and \(I_{n2}=n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n K(d(x,X_i)/h_n)\varepsilon _i\).

By Lemma 1, it suffices to show that

$$\begin{aligned} I_{n1}{\mathop {\longrightarrow }\limits ^\mathcal{P}} 0, \quad \text {and}\quad I_{n2}{\mathop {\longrightarrow }\limits ^\mathcal{D}} c_0 Z. \end{aligned}$$
(13)

Along similar lines to the proof of Lemma 1, we obtain

$$\begin{aligned} \text {E}I_{n1}= & {} n^{1/2-\beta }k_n^{-1}n \text {E}[K(d(x,X_0)/h_n)(r(X_0)-r(x))]\nonumber \\\le & {} Cn^{3/2-\beta -\alpha }\int _0^{h_n} u dF_x(u)=Cn^{3/2-\beta -\alpha } h_n\zeta _nf_x(\zeta _n)\nonumber \\= & {} O(n^{\alpha -\beta -1/2})=o(1), \end{aligned}$$

where \(0<\zeta _n<h_n\). Moreover, again by Assumption (A2),

$$\begin{aligned} \text {Var}(I_{n1})\le & {} n^{2-2\beta -2\alpha }\text {E}\big (K(d(x,X_0)/h_n)(r(X_0)-r(x))\big )^2+n^{1-2\beta -2\alpha }\sum _{i\ne j}\nonumber \\&\text {Cov}\left[ (r(X_i)-r(x))K(d(x,X_i)/h_n), (r(X_j)-r(x))K(d(x,X_j)/h_n)\right] \nonumber \\= & {} n^{2-2\beta -2\alpha }O(h_n^3)+n^{1-2\beta -2\alpha }\Big \{O(h_n^2)\sum _{i\ne j}\gamma _x(i-j)+O(n^2h_n^4)\Big \}\nonumber \\= & {} O(n^{\alpha -2\beta -1})+n^{1-2\beta -2\alpha }\big (O(n^{2\alpha -\tau _x D})+O(n^{4\alpha -2})\big )\nonumber \\= & {} O(n^{\alpha -2\beta -1})+O(n^{1-2\beta -\tau _x D})+O(n^{2\alpha -2\beta -1})=o(1). \end{aligned}$$
(14)

Then we arrive at \(I_{n1}{\mathop {\longrightarrow }\limits ^\mathcal{P}} 0\). For \(I_{n2}\), note that

$$\begin{aligned} I_{n2}= & {} n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n \big (K(d(x,X_i)/h_n)-\text {E}(K(d(x,X_i)/h_n))\big )\varepsilon _i\nonumber \\&+\,n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n \text {E}(K(d(x,X_i)/h_n))\varepsilon _i. \end{aligned}$$
(15)

Since

$$\begin{aligned} \text {E}(K(d(x,X_i)/h_n))=\int _0^{h_n} dF_x(u)=F_x(h_n)=h_nf_x(\zeta _n)=h_n(f_x(0)+o(1)), \end{aligned}$$

where \(0<\zeta _n<h_n\), by (9), we have

$$\begin{aligned} n^{1/2-\beta }k_n^{-1}\sum _{i=1}^n \text {E}(K(d(x,X_i)/h_n))\varepsilon _i {\mathop {\longrightarrow }\limits ^\mathcal{D}} c_0 Z. \end{aligned}$$
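The quantity \(\text {E}(K(d(x,X_i)/h_n))=F_x(h_n)\) appearing here is the small-ball probability of the functional variable, which for a given sample can be estimated by the empirical fraction of curves within distance \(h\) of \(x\). A self-contained sketch under an illustrative two-component sinusoidal curve model (an assumption, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(1)

# Empirical small-ball probability F_x(h) = P(d(x, X) <= h) for simulated
# curves X(t) = a sin(2 pi t) + b cos(2 pi t) with standard normal a, b.
n, m = 5000, 50
t = np.linspace(0.0, 1.0, m)
dt = t[1] - t[0]
a, b = rng.standard_normal(n), rng.standard_normal(n)
X = a[:, None] * np.sin(2 * np.pi * t) + b[:, None] * np.cos(2 * np.pi * t)
x = np.sin(2 * np.pi * t)                      # target curve: a = 1, b = 0
dist = np.sqrt(((X - x) ** 2).sum(axis=1) * dt)  # discretized L2 distance

hs = np.array([0.2, 0.4, 0.6, 0.8])
F_hat = np.array([(dist <= h).mean() for h in hs])   # empirical F_x(h)
print(dict(zip(hs.tolist(), F_hat.tolist())))
```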

Therefore, to complete the proof, it suffices to show that the first term of (15) tends to 0 in probability. Arguing as in (14), the variance of the first term of (15) is bounded by

$$\begin{aligned}&n^{2-2\beta -2\alpha }\text {E}\big (K(d(x,X_0)/h_n)\big )^2\text {E}\varepsilon _0^2\\&\qquad +\,n^{1-2\beta -2\alpha }\sum _{i\ne j}\gamma _{\varepsilon }(i-j) \text {Cov}\left[ K(d(x,X_i)/h_n), K(d(x,X_j)/h_n)\right] \\&\quad \le n^{2-2\beta -2\alpha }O(h_n)+Cn^{1-2\beta -2\alpha }\sum _{i\ne j}\gamma _\varepsilon (i-j)\gamma _x(i-j)\\&\quad =O(n^{1-2\beta -\alpha })+O(n^{2-2\alpha -\tau _x D})=o(1). \end{aligned}$$

This concludes the proof of Theorem 1. \(\square \)

Proof of Theorem 2

From (6), we obtain

$$\begin{aligned}&r_n(x, H_n)-r(x)\\&\quad =k_n^{-1}\sum _{i=1}^n K(d(x, X_i)/H_n)(r(X_i)-r(x))+k_n^{-1}\sum _{i=1}^n K(d(x, X_i)/H_n)\varepsilon _i\\&\quad :=R_1(x)+R_2(x), \quad \text {say}. \end{aligned}$$

It suffices to show that

$$\begin{aligned} \sup _{x\in S} |R_1(x)|=o_P(1),\quad \text {and}\quad \sup _{x\in S} |R_2(x)|=o_P(1). \end{aligned}$$
(16)

By (8) and Assumption (A4), we have

$$\begin{aligned} \sup _{x\in S} |R_1(x)|\le \sup _{x\in S} \sup _{y\in B(x, H_n)}|r(y)-r(x)|\le CH_n=O(h_n)\quad \text {a.s.} \end{aligned}$$

This is enough to prove the first claim of (16). It just remains to check the second result of (16). Note that we can write

$$\begin{aligned} \sup _{x\in S} |R_2(x)|\le \sup _{x\in S} |R_2(x)-R_2(t_x)|+\sup _{x\in S} |R_2(t_x)|, \end{aligned}$$
(17)

while,

$$\begin{aligned} \sup _{x\in S} |R_2(x)-R_2(t_x)|\le \sup _{x\in S} k_n^{-1}\sum _{i=1}^n \big |K(d(x, X_i)/H_n)-K(d(t_x, X_i)/H_n)\big ||\varepsilon _i|. \end{aligned}$$

By (8) and Assumption (B2), we have

$$\begin{aligned}&\text {E} \Big [\big |K(d(x, X_0)/H_n)-K(d(t_x, X_0)/H_n)\big |\Big ]\nonumber \\&\quad = \text {P}\big (X_0\in B(x, h_n)\cap \bar{B}(t_x, h_n)\big )+\text {P}\big (X_0\in \bar{B}(x, h_n)\cap B(t_x, h_n)\big )\nonumber \\&\quad =O(l_n) \end{aligned}$$

for any \(x\in S\). This result implies that, for any \(x\in S\) and any \(1\le i\le n\),

$$\begin{aligned} \big |K(d(x, X_i)/H_n)-K(d(t_x, X_i)/H_n)\big |=O_P(l_n). \end{aligned}$$

Moreover, for any \(\delta >0\), there exists an \(x^*\in S\) with

$$\begin{aligned}&\sup _{x\in S}\big |K(d(x, X_i)/H_n)-K(d(t_x, X_i)/H_n)\big |\\&\quad < |K(d(x^*, X_i)/H_n)-K(d(t_{x^*}, X_i)/H_n)\big |+\delta . \end{aligned}$$

Letting \(\delta \rightarrow 0\), we obtain

$$\begin{aligned} \sup _{x\in S}\big |K(d(x, X_i)/H_n)-K(d(t_x, X_i)/H_n)\big |=O_P(l_n). \end{aligned}$$

This leads directly to

$$\begin{aligned} \sup _{x\in S} |R_2(x)-R_2(t_x)|=O_P(l_nk_n^{-1}n)=O_P(n^{1-\alpha -\xi })=o_P(1). \end{aligned}$$
(18)

Turning to the second term on the right-hand side of (17), we have, for any \(\varepsilon >0\),

$$\begin{aligned}&\text {P}\big (\sup _{x\in S} |R_2(t_x)|>\varepsilon \big )\\&\quad =\text {P}\big (\max _{k\in \{1,\ldots , {z_n}\} } |R_2(t_k)-\text {E}(R_2(t_k))|>\varepsilon \big )\\&\quad \le z_n\max _{k\in \{1,\ldots , {z_n}\} }\text {P}\big ( |R_2(t_k)-\text {E}(R_2(t_k))|>\varepsilon \big ). \end{aligned}$$

By arguments similar to those in the proof of (13), and by Assumptions (A2), (B1) and (B3), one gets directly, for any \(\varepsilon >0\),

$$\begin{aligned} \text {P}\big (\sup _{x\in S} |R_2(t_x)|>\varepsilon \big )\le z_n\text {Var}\big ( R_2(t_1)\big )/\varepsilon ^2 =O(l_n^{-1})O(n^{2\beta -1})=o(1). \end{aligned}$$

This leads to

$$\begin{aligned} \sup _{x\in S} |R_2(t_x)|=o_P(1). \end{aligned}$$

This, together with (18), is enough to get

$$\begin{aligned}\sup _{x\in S} |R_2(x)|=o_P(1).\end{aligned}$$

This completes the proof of Theorem 2. \(\square \)

Rights and permissions

Reprints and permissions

About this article


Cite this article

Wang, L. Nearest neighbors estimation for long memory functional data. Stat Methods Appl 29, 709–725 (2020). https://doi.org/10.1007/s10260-019-00499-1

