
Asymptotic properties of the kernel estimate of spatial conditional mode when the regressor is functional


Abstract

A kernel estimator of the spatial modal regression for functional regressors is proposed. We establish, under some general mixing conditions, the \(L^p\)-consistency and the asymptotic normality of the estimator. The performance of the proposed estimator is illustrated in a real data application.
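To fix ideas, here is a minimal computational sketch of such an estimator (ours, purely illustrative: the function names, the triangular and Gaussian kernels, and the discretized \(L^2\) semi-metric are assumptions, and the spatial indexing of the sample, which drives the asymptotics, is left implicit):

```python
import numpy as np

def conditional_mode(x, X, Y, d, h_K, h_H, y_grid,
                     K=lambda u: np.where(u <= 1, 1 - u, 0.0),              # triangular kernel (placeholder)
                     H=lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)):   # Gaussian kernel (placeholder)
    """Kernel conditional-mode estimate: theta(x) = argmax_y f^x(y).

    X: functional covariates (curves), Y: real responses, d: a semi-metric
    on the curve space, h_K/h_H: bandwidths, y_grid: grid over the compact
    set S on which the conditional density is maximised.
    """
    Y = np.asarray(Y, dtype=float)
    # functional kernel weights K_i = K(d(x, X_i) / h_K)
    w = np.array([K(d(x, Xi) / h_K) for Xi in X])
    if w.sum() == 0.0:
        raise ValueError("empty neighbourhood of x: increase h_K")
    # kernel conditional density f^x(y) = sum_i K_i H((y - Y_i)/h_H) / (h_H sum_i K_i)
    dens = np.array([(w * H((y - Y) / h_H)).sum() for y in y_grid]) / (h_H * w.sum())
    # the mode estimate maximises the estimated conditional density over S
    return y_grid[np.argmax(dens)]

# toy usage: curves observed on a common grid, discretized L2 semi-metric
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
X = [np.sin(2 * np.pi * a * t) for a in rng.uniform(0.5, 2.0, 200)]
Y = np.array([Xi.mean() + 0.05 * rng.standard_normal() for Xi in X])
d_L2 = lambda u, v: np.sqrt(np.mean((u - v) ** 2))
theta_hat = conditional_mode(X[0], X, Y, d_L2, h_K=0.5, h_H=0.1,
                             y_grid=np.linspace(Y.min(), Y.max(), 201))
```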


References

  • Biau, G., Cadre, B.: Nonparametric spatial prediction. Stat. Inference Stoch. Process. 7, 327–349 (2004)

  • Bosq, D.: Linear processes in function spaces: theory and applications. Lecture Notes in Statistics. Springer, New York (2000)

  • Cressie, N.A.: Statistics for spatial data. Wiley, New York (1993)

  • Carbon, M., Hallin, M., Tran, L.T.: Kernel density estimation for random fields: the \(L^1\)-theory. J. Nonparametr. Stat. 6, 157–170 (1996)

  • Carbon, M., Tran, L.T., Wu, B.: Kernel density estimation for random fields. Stat. Probab. Lett. 36, 115–125 (1997)

  • Carbon, M., Francq, C., Tran, L.T.: Kernel regression estimation for random fields. J. Stat. Plan. Inference 137, 778–798 (2007)

  • Collomb, G., Härdle, W., Hassani, S.: A note on prediction via conditional mode estimation. J. Stat. Plan. Inference 15, 227–236 (1987)

  • Dabo-Niang, S., Laksaci, A.: Note on conditional mode estimation for functional dependent data. Statistica 70, 83–94 (2010)

  • Dabo-Niang, S., Thiam, B.: Robust quantile estimation and prediction for spatial processes. Stat. Probab. Lett. 80, 1447–1458 (2010)

  • Dabo-Niang, S., Rachdi, M., Yao, A.-F.: Spatial kernel regression estimation and prediction for functional random fields. Far East J. Stat. 37, 77–113 (2011a)

  • Dabo-Niang, S., Kaid, Z., Laksaci, A.: Sur la régression quantile pour variable explicative fonctionnelle: cas des données spatiales. C. R. Math. Acad. Sci. Paris 349, 1287–1291 (2011b)

  • Dabo-Niang, S., Kaid, Z., Laksaci, A.: On spatial conditional mode estimation for a functional regressor. Stat. Probab. Lett. 82, 1413–1421 (2012)

  • Dabo-Niang, S., Yao, A.-F.: Spatial kernel regression estimation. Math. Methods Stat. 16, 1–20 (2007)

  • Delicado, P., Giraldo, R., Comas, C., Mateu, J.: Statistics for spatial functional data: some recent contributions. Environmetrics 21, 224–239 (2010)

  • Diggle, P., Ribeiro, P.J.: Model-based geostatistics. Springer, New York (2007)

  • Ezzahrioui, M., Ould-Saïd, E.: Asymptotic normality of a nonparametric estimator of the conditional mode function for functional data. J. Nonparametr. Stat. 20, 3–18 (2008)

  • Fan, J., Gijbels, I.: Local polynomial modelling and its applications. Monographs on Statistics and Applied Probability. Chapman & Hall, London (1996)

  • Ferraty, F., Vieu, P.: Nonparametric functional data analysis: theory and practice. Springer, New York (2006)

  • Ferraty, F., Mas, A., Vieu, P.: Nonparametric regression on functional data: inference and practical aspects. Aust. N. Z. J. Stat. 49(3), 267–286 (2007)

  • Ferraty, F., Laksaci, A., Tadj, A., Vieu, P.: Rate of uniform consistency for nonparametric estimates with functional variables. J. Stat. Plan. Inference 140, 335–352 (2010)

  • Gao, J., Lu, Z., Tjøstheim, D.: Moment inequalities for spatial processes. Stat. Probab. Lett. 78, 687–697 (2008)

  • Giraldo, R.: Geostatistical analysis of functional data. Ph.D. thesis, Universitat Politècnica de Catalunya (2009)

  • Guyon, X.: Estimation d'un champ par pseudo-vraisemblance conditionnelle: étude asymptotique et application au cas Markovien. In: Proceedings of the Sixth Franco-Belgian Meeting of Statisticians (1987)

  • Laksaci, A., Maref, F.: Estimation non paramétrique de quantiles conditionnels pour des variables fonctionnelles spatialement dépendantes. C. R. Math. Acad. Sci. Paris 347, 1075–1080 (2009)

  • Laksaci, A., Madani, F., Rachdi, M.: Kernel conditional density estimation when the regressor is valued in a semi-metric space. Commun. Stat. Theory Methods 42, 3544–3570 (2013)

  • Li, J., Tran, L.T.: Nonparametric estimation of conditional expectation. J. Stat. Plan. Inference 139, 164–175 (2009)

  • Lu, Z., Chen, X.: Spatial kernel regression: weak consistency. Stat. Probab. Lett. 68, 125–136 (2004)

  • Ould Abdi, A., Diop, A., Dabo-Niang, S., Ould Abdi, S.A.: Estimation non paramétrique du mode conditionnel dans le cas spatial. C. R. Math. Acad. Sci. Paris 348, 815–819 (2010b)

  • Ramsay, J.O., Silverman, B.W.: Applied functional data analysis. Springer, New York (2002)

  • Ramsay, J.O., Silverman, B.W.: Functional data analysis, 2nd edn. Springer, New York (2005)

  • Ramsay, J.: FDA problems that I like to talk about. Personal communication (2008)

  • Ripley, B.: Spatial statistics. Wiley, New York (1981)

  • Roussas, G.G.: On some properties of nonparametric estimates of probability density functions. Bull. Soc. Math. Grèce (N.S.) 9, 29–43 (1968)

  • Tran, L.T.: Kernel density estimation on random fields. J. Multivar. Anal. 34, 37–53 (1990)

  • Volkonski, V.A., Rozanov, Y.A.: Some limit theorems for random functions. I. Teor. Veroyatn. Primen. 4, 186–207 (1959)


Author information


Corresponding author

Correspondence to Ali Laksaci.

Appendix

We first state the following lemmas, which are due to Carbon et al. (1997) and are needed for the convergence of our estimates; their proofs are therefore omitted.

Lemma 7

Suppose that \(E_1,\dots ,E_r\) are sets, each containing \(m\) sites, with \(\text {dist}(E_i,E_j)\ge \gamma \) for all \(i\ne j\), where \(1\le i,j\le r\). Suppose \(Z_1,\dots ,Z_r\) is a sequence of real-valued r.v.'s, measurable with respect to \(\mathcal {B}(E_1),\dots ,\mathcal {B}(E_r)\), respectively, and such that \(Z_i\) takes values in \([a,b]\). Then there exists a sequence of independent r.v.'s \(Z_1^*,\dots ,Z_r^*\), independent of \(Z_1,\dots ,Z_r\), such that \(Z_i^*\) has the same distribution as \(Z_i\) and satisfies

$$\begin{aligned} \sum _{i=1}^r E|Z_i-Z_i^*|\le 2r(b-a)\psi ((r-1)m,m)\varphi (\gamma ). \end{aligned}$$

Lemma 8

  1. (i)

    Suppose that (2) holds. Denote by \(\mathcal {L}_r(\mathcal {F})\) the class of \(\mathcal {F}\)-measurable r.v.’s (random variables) \(X\) satisfying \(\Vert X\Vert _r=(E|X|^r)^{1/r}<\infty \). Suppose \(X \in \mathcal {L}_r(\mathcal {B}(E))\) and \(Y \in \mathcal {L}_s(\mathcal {B}(E'))\). Assume also that \(1\le r,\ s,\ t<\infty \) and \(r^{-1}+s^{-1}+t^{-1}=1\). Then

    $$\begin{aligned} |EXY-EXEY|\le C\Vert X\Vert _r \Vert Y\Vert _s \{\psi (\mathrm{Card}(E),\mathrm{Card}(E'))\varphi (\text {dist}(E,E'))\}^{1/t}. \end{aligned}$$
    (10)
  2. (ii)

    For r.v.’s bounded with probability 1, the right-hand side of (10) can be replaced by

    $$\begin{aligned} C\psi (\mathrm{Card}(E),\mathrm{Card}(E'))\varphi (\text {dist}(E,E')). \end{aligned}$$

Proof of Lemma 1

We have

$$\begin{aligned} \left\| \widehat{f}^{x\, (1)}_N(\theta (x))-E\left[ \widehat{f}^{x\, (1)}_N(\theta (x))\right] \right\| _p= \frac{1}{\widehat{\mathbf{n }}\, h_H^2EK_\mathbf{1 }}\left\| \sum _\mathbf{i \in \mathbf I _\mathbf n } \theta _\mathbf{i }\right\| _p \end{aligned}$$

where

$$\begin{aligned}&\theta _\mathbf{i }=K_\mathbf{i }H_\mathbf{i }^{(1)}(\theta (x))-E\left[ K_\mathbf{i }H_\mathbf{i }^{(1)}(\theta (x))\right] , \\&K_\mathbf{i }=K(h_K^{-1}d(x,X_\mathbf{i })),\quad H_\mathbf{i }=H(h_H^{-1}(y-Y_\mathbf{i })). \end{aligned}$$

We have \( EK_\mathbf{1 }=O(\phi _x(h_K))\) (because of (H6)), so it remains to show that

$$\begin{aligned} \left\| \sum _\mathbf{i \in \mathbf I _\mathbf n } \theta _\mathbf{i }\right\| _p=O(\sqrt{\widehat{\mathbf{n }}h_H\phi _x(h_K)}). \end{aligned}$$
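The reduction below from general \(p\) to \(p=2r\) rests on the monotonicity of \(L^p\)-norms on a probability space, a standard consequence of Hölder's inequality:

$$\begin{aligned} \left\| \sum _\mathbf{i \in \mathbf I _\mathbf n }\theta _\mathbf{i }\right\| _p\le \left\| \sum _\mathbf{i \in \mathbf I _\mathbf n }\theta _\mathbf{i }\right\| _{2r}\qquad \text{ for } \text{ all }\ p\le 2r. \end{aligned}$$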

If we show the claimed result for \(p=2r\), the lower values of \(p\) follow from Hölder's inequality, so it suffices to prove the case \(p=2r\). To do so, we use the same ideas as Gao et al. (2008) and Ould Abdi et al. (2010b). Indeed, we have

$$\begin{aligned} E\left[ \left( \sum _\mathbf{i \in \mathcal {I}_\mathbf{n }}\theta _\mathbf{i }\right) ^{2r}\right] =\sum _\mathbf{i \in \mathcal {I}_\mathbf{n }}E\left[ \theta _\mathbf{i }^{2r}\right] +\sum _{s=1}^{2r-1}\sum _{\nu _0+\nu _1+\dots +\nu _s=2r}V_s(\nu _0,\nu _1,\dots ,\nu _s) \end{aligned}$$

where \(\sum _{\nu _0+\nu _1+\dots +\nu _s=2r}\) is the summation over \((\nu _0,\nu _1,\dots ,\nu _s)\) with positive integer components satisfying \(\nu _0+\nu _1+\dots +\nu _s=2r\) and

$$\begin{aligned} V_s(\nu _0,\nu _1,\dots ,\nu _s)=\sum _\mathbf{i _0\ne \mathbf i _1\ne \dots \ne \mathbf i _s\in \mathcal {I}_\mathbf{n }}E\left[ \theta _\mathbf{i _0}^{\nu _0}\theta _\mathbf{i _1}^{\nu _1}\dots \theta _\mathbf{i _s}^{\nu _s}\right] . \end{aligned}$$

Firstly, by stationarity, we have

$$\begin{aligned} \sum _\mathbf{i \in I_\mathbf{n }}E\left[ \theta _\mathbf{i }^{2r}\right] \le C\widehat{\mathbf{n }}E\left[ \left| \theta _\mathbf{i }\right| ^{2r}\right] \le C\widehat{\mathbf{n }}E\left[ \left( K_\mathbf{i }H_\mathbf{i }^{(1)}(\theta (x))\right) ^{2r}\right] \le C\widehat{\mathbf{n }} h_H\phi _x(h_K). \end{aligned}$$
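The last bound follows by conditioning on the functional variable; here is a sketch, under the boundedness conditions on the kernels and on the conditional density assumed in the main text:

$$\begin{aligned} E\left[ \left( K_\mathbf{1 }H_\mathbf{1 }^{(1)}(\theta (x))\right) ^{2r}\right] = E\left[ K_\mathbf{1 }^{2r}\,E\left[ \left( H_\mathbf{1 }^{(1)}(\theta (x))\right) ^{2r}\Big |X_\mathbf{1 }\right] \right] \le Ch_H\,E\left[ K_\mathbf{1 }^{2r}\right] \le Ch_H\phi _x(h_K), \end{aligned}$$

since the change of variables \(t=h_H^{-1}(\theta (x)-z)\) gives \(E[(H_\mathbf{1 }^{(1)}(\theta (x)))^{2r}|X_\mathbf{1 }]=h_H\int (H^{(1)}(t))^{2r}f^{X_\mathbf{1 }}(\theta (x)-h_Ht)\,dt=O(h_H)\), while \(K\) bounded and supported in \([0,1]\) yields \(E[K_\mathbf{1 }^{2r}]\le CP\left( X_\mathbf{1 }\in B(x,h_K)\right) =O(\phi _x(h_K))\).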

Now, we control the second term \(V_s(\nu _0,\nu _1,\dots ,\nu _s)\). For this, we distinguish the two cases \(s<r\) and \(s\ge r\). The second case can be evaluated by a straightforward modification of the proof of Lemma 3.4 of Gao et al. (2008), by taking into account the fact that

$$\begin{aligned} E\left| \theta _{\mathbf{i }_0}^{\nu _0}\theta _{\mathbf{i }_1}^{\nu _1}\dots \theta _{\mathbf{i }_s}^{\nu _s}\right| \le Ch_H^{1+s}\phi _x^{1+v_{s}}(h_K). \end{aligned}$$

It follows that, for a certain real sequence \(P:=P_\mathbf{n }\), we have

$$\begin{aligned} V_s(\nu _0,\nu _1,\dots ,\nu _s) \le C(\widehat{\mathbf{n }})^r\left( P^{Nr}h_H^{1+s}\phi _x(h_K)^{(1+v_{s})}+P^{Nr-1-\delta }\right) . \end{aligned}$$

So, if we take \(P=\phi _x(h_K)^{-(1+v_{Nr})/(1+\delta )}\), we obtain that

$$\begin{aligned} \displaystyle V_s(\nu _0,\nu _1,\dots ,\nu _s)=O\left( \left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{r}\right) ,\qquad \text{ for }\, \qquad r\le s\le 2r-1. \end{aligned}$$

Next, the case \(s<r\) is evaluated in the same way as in Lemma 3.3 of Gao et al. (2008). Indeed, we set

$$\begin{aligned} V_{s1}=\sum _\mathbf{i _0\ne \mathbf i _1\ne \cdots \ne \mathbf i _s}\left[ E\left( \prod _{j=0}^s\theta _\mathbf{i _j}^{\nu _j}\right) -\prod _{j=0}^s E \theta _\mathbf{i _j}^{\nu _j}\right] \, \text{ and } \, V_{s2}=\sum _\mathbf{i _0\ne \mathbf i _1\ne \cdots \ne \mathbf i _s}\prod _{j=0}^s E \theta _\mathbf{i _j}^{\nu _j}. \end{aligned}$$

So,

$$\begin{aligned} V_s(\nu _0,\nu _1,\dots ,\nu _s)=V_{s1}+V_{s2}. \end{aligned}$$

It is clear that,

$$\begin{aligned} \displaystyle \left| V_{s2}\right| \le C\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{s+1}. \end{aligned}$$

Furthermore for the term \(V_{s1}\), we have

$$\begin{aligned} E\left( \prod _{j=0}^s\theta _\mathbf{i _j}^{\nu _j}\right) -\prod _{j=0}^s E \theta _\mathbf{i _j}^{\nu _j}&= \sum _{l=0}^{s-1}\left( \prod _{j=0}^{l-1} E \theta _\mathbf{i _j}^{\nu _j}\right) \\&\quad \times \left( E\left[ \prod _{j=l}^s\theta _\mathbf{i _j}^{\nu _j}\right] -E\left[ \theta _\mathbf{i _l}^{\nu _l}\right] E\left[ \prod _{j=l+1}^s\theta _\mathbf{i _j}^{\nu _j}\right] \right) \end{aligned}$$

where we adopt the convention that the empty product \( \prod _{j=l}^s\,\cdot \,=1\) when \(l>s\). Then we obtain

$$\begin{aligned} \left| V_{s1}\right|&\le \sum _{l=0}^{s-1}\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^l\sum _{\mathbf{i }_l\ne \cdots \ne \mathbf{i }_s} \left| E\left[ \prod _{j=l}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] -E\left[ \theta _{\mathbf{i }_l}^{\nu _l}\right] E\left[ \prod _{j=l+1}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] \right| \\&= \sum _{l=0}^{s-1}\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^lV_{ls1}. \end{aligned}$$

We have

$$\begin{aligned} V_{ls1}&= \sum _{0<\text {dist}\left( \{\mathbf{i }_l\},\{\mathbf{i }_{l+1},\dots ,\mathbf{i }_s\}\right) \le P}\left| E\left[ \prod _{j=l}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] -E\left[ \theta _{\mathbf{i }_l}^{\nu _l}\right] E\left[ \prod _{j=l+1}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] \right| \\&\quad +\sum _{\text {dist}\left( \{\mathbf{i }_l\},\{\mathbf{i }_{l+1},\dots ,\mathbf{i }_s\}\right) > P}\left| E\left[ \prod _{j=l}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] -E\left[ \theta _{\mathbf{i }_l}^{\nu _l}\right] E\left[ \prod _{j=l+1}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] \right| \\&:= V_{ls11}+V_{ls12}. \end{aligned}$$

Once again by (H2) we have

$$\begin{aligned} \left| E\left[ \prod _{j=l}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] -E\left[ \theta _{\mathbf{i }_l}^{\nu _l}\right] E\left[ \prod _{j=l+1}^s\theta _{\mathbf{i }_j}^{\nu _j}\right] \right| \le Ch_H^{s-l}\phi _x(h_K)^{1+v_{s+1-l}}. \end{aligned}$$

Thus, we have

$$\begin{aligned} V_{ls11}\displaystyle \le Ch_H^{s-l}\phi _x(h_K)^{1+v_{s+1-l}}\widehat{\mathbf{n }}^{(s-l)}P^N. \end{aligned}$$

Since the variables \(\theta _{\mathbf{i }}\) are bounded, we have (see Lemma 8)

$$\begin{aligned} V_{ls12} \le C\widehat{\mathbf{n }}^{(s-l)}\sum _{t=P+1}^\infty t^{N-1}\varphi (t). \end{aligned}$$

Combining the upper bounds of \(V_{ls11}\) and \(V_{ls12}\), we have

$$\begin{aligned}&\left| V_{s1}\right| \displaystyle \le C\sum _{l=0}^{s-1}\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^l\left[ h_H^{s-l}\phi _x(h_K)^{1+v_{s+1-l}}\widehat{\mathbf{n }}^{(s-l)}P^N +\widehat{\mathbf{n }}^{(s-l)}\sum _{t=P+1}^\infty t^{N-1}\varphi (t)\right] \\&\quad \displaystyle \le C\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{(s+1)}\\&\quad \displaystyle \times \sum _{l=0}^{s-1}\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{l-s-1} \left[ h_H^{s-l}\phi _x(h_K)^{1+v_{s+1-l}}\widehat{\mathbf{n }}^{(s-l)}P^N +\widehat{\mathbf{n }}^{(s-l)}\sum _{t=P+1}^\infty t^{N-1}\varphi (t)\right] .\\ \end{aligned}$$

Taking \(P=(h_H\phi _x(h_K))^{-1/N}\), we obtain

$$\begin{aligned} \left| V_{s1}\right| \le C\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{(s+1)}. \end{aligned}$$

\(\square \)

Proof of Lemma 2

It is easy to see that

$$\begin{aligned} E\left[ \widehat{f}^{x\, (1)}_N(\theta (x))\right] -f^{x\, (1)}(\theta (x))&= \frac{1}{h_H^2EK_1}E\left[ K_1 E\left[ H_1^{(1)}(\theta (x))\ | \ X_1\right] \right] \\&\quad -f^{x\, (1)}(\theta (x)). \end{aligned}$$

By an integration by parts and the change of variables \(\displaystyle t=\frac{y-z}{h_H}\), we have

$$\begin{aligned}&E\left[ \widehat{f}^{x\, (1)}_N(\theta (x))\right] -f^{x\, (1)}(\theta (x))\\&\quad \le \frac{1}{EK_1}\left( EK_1\int H(t) \left| f^{X_1\, (1)}(\theta (x)-h_Ht)-f^{x\, (1)}(\theta (x))\right| {d}t\right) . \end{aligned}$$

Hypotheses (H4) and (H6) then yield the desired result. \(\square \)

Proof of Lemma 3

It is clear that, for all \(\epsilon <1\), we have

$$\begin{aligned} P\left( \widehat{f}^x_D=0\right) \le P\left( \widehat{f}^x_D\le 1-\epsilon \right) \le P\left( |\widehat{f}^x_D-E[\widehat{f}^x_D]|\ge \epsilon \right) . \end{aligned}$$

Markov's inequality gives, for any \(p>0\),

$$\begin{aligned} P\left( |\widehat{f}^x_D-E[\widehat{f}^x_D]|\ge \epsilon \right) \le \frac{E\left[ |\widehat{f}^x_D-E[\widehat{f}^x_D]|^p\right] }{\epsilon ^p}. \end{aligned}$$
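Taking the \(p\)-th root and fixing \(\epsilon \in (0,1)\) makes the next bound explicit:

$$\begin{aligned} \left( P\left( \widehat{f}^x_D=0\right) \right) ^{1/p}\le \frac{1}{\epsilon }\left\| \widehat{f}^x_D-E[\widehat{f}^x_D]\right\| _p. \end{aligned}$$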

So

$$\begin{aligned} \left( P\left( \widehat{f}^x_D=0\right) \right) ^{1/p}=O\left( \left\| \widehat{f}^x_D-E[\widehat{f}^x_D]\right\| _p\right) . \end{aligned}$$

The computation of \( \displaystyle \left\| \widehat{f}^x_D-E[\widehat{f}^x_D]\right\| _p\) can be done by following the same arguments as those invoked to get (5). This yields the proof. \(\square \)

Proof of Lemma 4

We start by writing \(\forall y\in S\)

$$\begin{aligned} E[\widehat{f}^{x\, (1)}_N(y)]=\frac{1}{E[K_\mathbf{1 }]}E\left[ K_\mathbf{1 }E\left[ h_H^{-2}H_\mathbf{1 }^{\prime }|X\right] \right] \quad \text{ with }\quad h_H^{-2}E\left[ H_\mathbf{1 }^{\prime }|X\right] = \int _{{\mathrm {I\!R}}}H(t)f^{X\, (1)}(y-h_H t)\,{d}t. \end{aligned}$$

Next, we use a Taylor expansion under (H4), as follows

$$\begin{aligned} h_H^{-2}E\left[ H_\mathbf{1 }^{\prime }|X\right] =f^{X\, (1)}(y)+ \frac{h_H}{2}\left( \int tH(t) {d}t\right) \frac{\partial ^2 f^{X}(y)}{\partial y^2} +o(h_H) . \end{aligned}$$

Thus, we get

$$\begin{aligned} E\left[ \widehat{f}^{x\, (1)}_N(y)\right]&= \frac{1}{E[K_\mathbf{1 }]}\left( E\left[ K_\mathbf{1 }f^{X\, (1)}(y)\right] \right. \\&\quad \left. +\,h_H\left( \int tH(t) {d}t\right) E\left[ K_\mathbf{1 }\frac{\partial ^2 f^{X}(y)}{\partial y^2}\right] +o(h_H)\right) . \end{aligned}$$

Setting \(\psi (\cdot ,y):=\frac{\partial f^{\cdot }(y)}{\partial y }\), and since \(\Phi (0)=0\), we have

$$\begin{aligned} E\left[ K_\mathbf{1 }\psi (X,y)\right]&= \psi (x,y) E[K_\mathbf{1 }]+E\left[ K_\mathbf{1 }\left( \psi (X,y)-\psi (x,y)\right) \right] \\&= \psi (x,y) E[K_\mathbf{1 }]+E\left[ K_\mathbf{1 }\Phi (d(x,X))\right] \\&= \psi (x,y) E[K_\mathbf{1 }]+\displaystyle \Phi ^{\prime }(0) E\left[ d(x,X)K_\mathbf{1 }\right] + o(E\left[ d(x,X)K_\mathbf{1 }\right] ). \end{aligned}$$

Therefore,

$$\begin{aligned} E\left[ \widehat{f}^{x\, (1)}_N(y)\right]&= f^{x\, (1)}(y)+ \frac{h_H}{2}\frac{\partial ^2 f^x(y)}{\partial y^2}\int tH(t) {d}t+o\left( h_H^2\frac{E\left[ d(x,X)K_\mathbf{1 }\right] }{E[K_\mathbf{1 }]}\right) \\&\quad +\displaystyle \Phi ^{\prime }(0)\frac{E\left[ d(x,X)K_\mathbf{1 }\right] }{E[K_\mathbf{1 }]}+o\left( \frac{E\left[ d(x,X)K_\mathbf{1 }\right] }{E[K_\mathbf{1 }]}\right) . \end{aligned}$$

By using the same idea as Ferraty et al. (2007), we obtain that

$$\begin{aligned} \frac{1}{\phi _x(h_K)}E\left[ d(x,X)K_\mathbf{1 }\right] =h_K\left( K(1)-\int _0^1 (sK(s))'\beta _x(s){d}s+ o(1)\right) \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\phi _x(h_K)}E\left[ K_\mathbf{1 }\right] =K(1)-\int _0^1K'(s)\beta _x(s)ds +o(1). \end{aligned}$$

So,

$$\begin{aligned} E\left[ \widehat{f}_N^{x\, (1)}(y)\right]&= f^{x\, (1)}(y)+ \frac{h_H}{2}\frac{\partial ^2 f^x(y)}{\partial y^2}\int tH(t) dt\\&\quad +\displaystyle h_K\Phi ^{\prime }(0)\frac{\left( K(1)-\int _0^1 (sK(s))'\beta _x(s)ds\right) }{\left( K(1)-\int _0^1K'(s)\beta _x(s)ds\right) }+o(h_H)+o(h_K) . \end{aligned}$$

In particular for \(y=\theta (x)\), we obtain the desired result of the lemma. \(\square \)

Proof of Lemma 5

For the variance term \(\mathrm{Var}[\widehat{f}^{x\, (1)}_N(y)]\), we have

$$\begin{aligned} \mathrm{Var}[\widehat{f}^{x\, (1)}_N(y)]=\frac{1}{(h_H^2\widehat{\mathbf{n }} E\left[ K_\mathbf{1 }\right] )^2}\mathrm{Var}\left[ \sum _{\mathbf{i \in I_\mathbf{n }}} D_\mathbf{i } \right] \end{aligned}$$

where

$$\begin{aligned} D_\mathbf{i }=K_\mathbf{i }H_\mathbf{i }^{\prime }-E\left[ K_\mathbf{i }H_\mathbf{i }^{\prime }\right] . \end{aligned}$$

Thus

$$\begin{aligned} \mathrm{Var}[\widehat{f}^{x\, (1)}_N(y)]=\frac{1}{\widehat{\mathbf{n }}(h_H^2E\left[ K_\mathbf{1 }\right] )^2} \mathrm{Var}\left[ D_\mathbf{1 }\right] +\frac{1}{(h_H^2\widehat{\mathbf{n }}E\left[ K_\mathbf{1 }\right] )^2}\sum _{\mathbf{i }\not =\mathbf{j }} \mathrm{Cov}(D_\mathbf{i },D_\mathbf{j }). \end{aligned}$$

Let us calculate the quantity \(\mathrm{Var}\left[ D_\mathbf{1 }\right] \). We have:

$$\begin{aligned} \mathrm{Var}\left[ D_\mathbf{1 }\right]&= E\left[ K_\mathbf{1 }^2H_\mathbf{1 }^{\prime ^2}\right] -\left( E\left[ K_\mathbf{1 } H_\mathbf{1 }^{\prime }\right] \right) ^2\\&= E\left[ K_\mathbf{1 }^2\right] \frac{E\left[ K_\mathbf{1 }^2H_\mathbf{1 }^{\prime ^2}\right] }{E\left[ K_\mathbf{1 }^2\right] }-(E\left[ K_\mathbf{1 }\right] )^2\left( \frac{E\left[ K_\mathbf{1 }H_\mathbf{1 }^{\prime }\right] }{E\left[ K_\mathbf{1 }\right] }\right) ^2. \end{aligned}$$

So, by using the same arguments as those used in the previous lemma, we get

$$\begin{aligned} \begin{array}{ll} &{}\displaystyle \frac{1}{\phi _x(h_K)}E\left[ K_\mathbf{1 }^2\right] \displaystyle =K^2(1)-\int _0^1 (K^2(s))'\beta _x(s)ds +o(1) \\ &{}\displaystyle \frac{E\left[ K_\mathbf{1 }^2H_\mathbf{1 }^{\prime ^2}\right] }{E\left[ K_\mathbf{1 }^2\right] } \displaystyle = h_Hf^x(y)\int H^{\prime ^2}(t)dt +o(h_H) \\ &{}\displaystyle \frac{E[K_\mathbf{1 }H_\mathbf{1 }^{\prime }]}{E\left[ K_\mathbf{1 }\right] }\displaystyle =h_Hf^x(y)\int H^{\prime }(t)dt+o(h_H) \end{array} \end{aligned}$$

which implies that

$$\begin{aligned} \mathrm{Var}\left[ D_\mathbf{i }\right]&= h_H\phi _x(h_K)f^x(y)\int H^{\prime ^2}(t)dt\left( K^2(1)-\int _0^1 (K^2(s))'\beta _x(s)ds \right) \nonumber \\&\quad +o\left( h_H\phi _x(h_K)\right) . \end{aligned}$$
(11)

Now, let us focus on the covariance term. To do that, we define

$$\begin{aligned} E_1=\{\mathbf{i, j }\in I_{\mathbf{n }}: 0<\left\| \mathbf{i }-\mathbf{j }\right\| \le c_{\mathbf{n }}\}\quad \text{ and } \quad E_2=\{\mathbf{i, j }\in I_{\mathbf{n }}: \left\| \mathbf{i }-\mathbf{j }\right\| > c_{\mathbf{n }}\}. \end{aligned}$$

For all \((\mathbf{i },\mathbf{j })\in E_1\) we write

$$\begin{aligned} {\text {Cov}}\left( D_{\mathbf{i }},D_{\mathbf{j }}\right) = E\left[ K_{\mathbf{i }}K_{\mathbf{j }}H_{\mathbf{i }}^{\prime }H_{\mathbf{j }}^{\prime }\right] - \left( E\left[ K_{\mathbf{i }}H_{\mathbf{i }}^{\prime }\right] \right) ^2 \end{aligned}$$

and we use the facts that \({\mathrm {I\!E}}\left[ H_\mathbf{i }^{\prime }H_\mathbf{j }^{\prime }|(X_\mathbf{i },X_\mathbf{j } )\right] =O(h_H^2)\) for all \(\mathbf i \not =\mathbf j \) and \({\mathrm {I\!E}}\left[ H_\mathbf{i }^{\prime }|X_\mathbf{i }\right] =O(h_H)\) for all \(\mathbf i \). Under (H2) and (H5), we get

$$\begin{aligned} E\left[ K_\mathbf{i }K_\mathbf{j }H_\mathbf{i }^{\prime }H_\mathbf{j }^{\prime }\right] \le Ch_H^2P\left[ (X_\mathbf i ,X_\mathbf j ) \in B(x,h_K)\times B(x,h_K)\right] \end{aligned}$$

and

$$\begin{aligned} E\left[ K_\mathbf{i }H_\mathbf{i }^{\prime }\right] \le C h_H P\left( X_\mathbf i \in B(x,h_K)\right) . \end{aligned}$$

It follows that

$$\begin{aligned} \mathrm{Cov}\left( D_\mathbf{i },D_\mathbf{j }\right) \le Ch_H^2\phi _x(h_K)(\phi _x(h_K)+\phi _x^{v_1}(h_K)). \end{aligned}$$

So

$$\begin{aligned} \displaystyle \sum _{E_1}\mathrm{Cov}\left( D_\mathbf{i },D_\mathbf{j }\right) \le C \widehat{\mathbf{n }}c_\mathbf{n }^N h_H^2\phi _x^{1+v_1}(h_K). \end{aligned}$$

On the other hand, Lemma 8 and \(\left| D_\mathbf{i }\right| \le C \) permit us to write, for \((\mathbf i ,\mathbf j )\in E_2\),

$$\begin{aligned} \left| \mathrm{Cov}\left( D_\mathbf{i },D_\mathbf{j }\right) \right| \le C \varphi \left( \Vert \mathbf i -\mathbf j \Vert \right) \end{aligned}$$

and

$$\begin{aligned} \sum _{E_2}\mathrm{Cov}\left( D_\mathbf{i },D_\mathbf{j }\right)&\le C \sum _{E_2}\varphi \left( \Vert \mathbf i -\mathbf j \Vert \right) \\&\le C \widehat{\mathbf{n }}\sum _\mathbf{i :\Vert \mathbf i \Vert >c_\mathbf{n }}\varphi \left( \Vert \mathbf i \Vert \right) \\&\le C\widehat{\mathbf{n }}c_\mathbf{n }^{-Na} \sum _\mathbf{i :\Vert \mathbf i \Vert >c_\mathbf{n }}\Vert \mathbf i \Vert ^{Na}\varphi \left( \Vert \mathbf i \Vert \right) . \end{aligned}$$

Finally, we have:

$$\begin{aligned} \sum _{\mathbf{i }\not =\mathbf{j }} \mathrm{Cov}(D_\mathbf{i },D_\mathbf{j })\le \left( C\widehat{\mathbf{n }}c_\mathbf{n }^N h_H^2\phi _x^{1+v_1}(h_K)+C\widehat{\mathbf{n }}c_\mathbf{n }^{-Na}\sum _\mathbf{i : \Vert \mathbf i \Vert >c_\mathbf{n }}\Vert \mathbf i \Vert ^{Na}\varphi \left( \Vert \mathbf i \Vert \right) \right) . \end{aligned}$$

Choosing \(c_\mathbf{n }=(h_H^{2/(a+1)}\phi _x^{1/a}(h_K))^{-1/N}\), we obtain

$$\begin{aligned} \sum _{\mathbf{i }\not =\mathbf{j }}\mathrm{Cov}\left( D_\mathbf{i },D_\mathbf{j }\right) =o\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) . \end{aligned}$$

In conclusion, we have

$$\begin{aligned} \mathrm{Var}[\widehat{f}^{x\, (1)}_N(y)]&= \frac{f^x(y)}{\widehat{\mathbf{n }} h_H^3\phi _x(h_K) }\left( \int H^{\prime ^2}(t)dt\right) \left( \frac{\left( K^2(1)-\int _0^1 (K^2(s))'\beta _x(s)ds\right) }{\left( K(1)-\int _0^1K'(s)\beta _x(s)ds\right) ^2} \right) \\&\quad +o\left( \frac{1}{\widehat{\mathbf{n }}h_H^3\phi _x(h_K) }\right) . \end{aligned}$$

The result of the lemma corresponds to the case \(y=\theta (x)\). \(\square \)
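For later use (the proof of Lemma 6 requires \(\widehat{\mathbf{n }}^{-1}\mathrm{Var}(S_\mathbf{n })\rightarrow \sigma ^2(x)\) in (15)), the normalizing constant read off from this expansion at \(y=\theta (x)\) should be

$$\begin{aligned} \sigma ^2(x)=f^x(\theta (x))\left( \int H^{\prime ^2}(t)dt\right) \frac{K^2(1)-\int _0^1 (K^2(s))'\beta _x(s)ds}{\left( K(1)-\int _0^1K'(s)\beta _x(s)ds\right) ^2}. \end{aligned}$$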

Proof of Lemma 6

We set

$$\begin{aligned} S_{\mathbf{n }}= \sum _{{\mathbf{i }\in \mathbf I _\mathbf{n }}}\Lambda _{\mathbf{i }} \end{aligned}$$

where

$$\begin{aligned} \Lambda _\mathbf{i }:=\frac{\sqrt{h_H\phi _x(h_K)}}{h_H E[K_{\mathbf{1 }}]}D_\mathbf{i }. \end{aligned}$$
(12)

Clearly, we have

$$\begin{aligned} \sqrt{\widehat{\mathbf{n }}h_H^3\phi _x(h_K)}\left[ \sigma (x)\right] ^{-1}\left( \widehat{f}^{x\, (1)}_N(y)-E\widehat{f}^{x\, (1)}_N(y)\right) =\left( \widehat{\mathbf{n }}(\sigma ^2(x))\right) ^{-1/2}S_{\mathbf{n }}. \end{aligned}$$

So, to prove the asymptotic normality of \(\left( \widehat{\mathbf{n }}(\sigma (x))^2\right) ^{-1/2}S_{\mathbf{n }}\), it suffices to prove this lemma. This is done by the blocking method, in which the random variables \(\Lambda _{\mathbf{i }}\) are grouped into blocks of different sizes, defined as follows

$$\begin{aligned} W(1,\mathbf{n },x,\mathbf{j })=\sum _{{i_k=j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+1,\; k=1,\dots ,N}}^{j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}}\Lambda _{\mathbf{i }} , \end{aligned}$$
$$\begin{aligned} W(2,\mathbf{n },x,\mathbf{j })=\sum _{{i_k=j_k(p_{\mathbf{n }} +q_{\mathbf{n }})+1,\;k=1,\dots ,N-1}}^{j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}}\quad \sum _{i_N=j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}+1}^{(j_N+1)(p_{\mathbf{n }}+q_{\mathbf{n }})}\Lambda _{\mathbf{i }} , \end{aligned}$$
$$\begin{aligned} W(3,\mathbf{n },x,\mathbf{j })\!=\!\sum _{{i_k=j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+1,\; k=1,\dots ,N-2}}^{j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}}\quad \sum _{i_{N-1}=j_{N-1}(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}+1}^{(j_{N-1}+1)(p_{\mathbf{n }} +q_{\mathbf{n }})}\quad \sum _{i_N=j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+1}^{j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}}\Lambda _{\mathbf{i }} , \end{aligned}$$
$$\begin{aligned} W(4,\mathbf{n },x,\mathbf{j })=\sum _{{i_k=j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+1,\; k=1,\dots ,N-2}}^{j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}}\quad \sum _{i_{N-1} =j_{N-1}(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}+1}^{(j_{N-1}+1)(p_{\mathbf{n }}+q_{\mathbf{n }})} \quad \sum _{i_N=j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}+1}^{(j_N+1)(p_{\mathbf{n }}+q_{\mathbf{n }})}\Lambda _{\mathbf{i }} , \end{aligned}$$

and so on. The last two terms are

$$\begin{aligned} W(2^{N-1},\mathbf{n },x,\mathbf{j })=\sum _{{i_k=j_k(p_{\mathbf{n }}+q_{\mathbf{n }}) +p_{\mathbf{n }}+1,\;k=1,\dots ,N-1}}^{(j_k+1)(p_{\mathbf{n }}+q_{\mathbf{n }})}\quad \sum _{i_N=j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+1}^{j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}}\Lambda _{\mathbf{i }} , \end{aligned}$$
$$\begin{aligned} W(2^N,\mathbf{n },x,\mathbf{j })=\sum _{{i_k=j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}+1, \;k=1,\dots ,N}}^{(j_k+1)(p_{\mathbf{n }}+q_{\mathbf{n }})}\Lambda _{\mathbf{i }} \end{aligned}$$

where \( q_{\mathbf{n }}=o\left( \left[ \widehat{\mathbf{n }} (h_H\phi _x(h_K))^{(1+2N)}\right] ^{1/(2N)}\right) \) and \(p_{\mathbf{n }}=\left[ (\widehat{\mathbf{n }} h_H\phi _x(h_K))^{1/(2N)}/s_{\mathbf{n }}\right] \) with \(s_{\mathbf{n }}=o\left( \left[ \widehat{\mathbf{n }} (h_H\phi _x(h_K))^{(1+2N)}\right] ^{1/(2N)}q_{\mathbf{n }}^{-1}\right) \). Because of (H11), all the sequences \(q_{\mathbf{n }}\), \(p_{\mathbf{n }}\) and \(s_{\mathbf{n }}\) tend to infinity.
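To visualize this decomposition, the following sketch (ours, purely illustrative; the function name and the 1-based site indexing are assumptions) enumerates, for one super-block index \(\mathbf j \), the \(2^N\) index sets over which the sums \(W(1,\mathbf n ,x,\mathbf j ),\dots ,W(2^N,\mathbf n ,x,\mathbf j )\) run:

```python
import itertools

def bernstein_blocks(j, p, q, N):
    """Enumerate the 2^N block types inside the super-block indexed by j.

    Each coordinate k ranges either over the 'big' segment of length p or
    the 'small' segment of length q of a super-block of side p + q; the
    bitmask `mask` records which coordinates take the small segment. The
    correspondence between `mask` and the paper's W(1), ..., W(2^N) is up
    to enumeration order.
    """
    blocks = {}
    for mask in range(2 ** N):
        ranges = []
        for k in range(N):
            start = j[k] * (p + q)
            if mask & (1 << k):   # small segment: q sites
                ranges.append(range(start + p + 1, start + p + q + 1))
            else:                 # big segment: p sites
                ranges.append(range(start + 1, start + p + 1))
        blocks[mask] = list(itertools.product(*ranges))
    return blocks

# N = 2, p = 3, q = 1: four block types tiling a (p+q)^N super-block
b = bernstein_blocks((0, 0), p=3, q=1, N=2)
assert len(b[0]) == 3 ** 2                        # 'big' block carrying W(1, n, x, j)
assert len(b[3]) == 1 ** 2                        # 'small' corner block
assert sum(len(v) for v in b.values()) == 4 ** 2  # the types tile the super-block
```

Only the big blocks carry the leading contribution; the remaining \(2^N-1\) types are asymptotically negligible, which is the content of (14) below.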

Now, define for each \(i=1,\dots ,2^N\),

$$\begin{aligned} T(\mathbf{n },x,i)=\sum _{\mathbf{j }\in \mathcal {J}}W(i,\mathbf{n },x,\mathbf{j }). \end{aligned}$$

where \(\mathcal {J}=\{0,\dots ,r_1-1\}\times \cdots \times \{0,\dots ,r_N-1\}\) with \( r_k=n_k(p_{\mathbf{n }}+q_{\mathbf{n }})^{-1}\) (so that \(\prod _{k=1}^N r_k=\widehat{\mathbf{n }}(p_{\mathbf{n }}+q_{\mathbf{n }})^{-N}\)). So, we have

$$\begin{aligned}&\left( \widehat{\mathbf{n }}(\sigma ^2(x))\right) ^{-1/2}S_{\mathbf{n }}\\&\quad =\left[ \sqrt{\widehat{\mathbf{n }}}\sigma (x)\right] ^{-1}\left( T\left( \mathbf{n },x,1\right) +\sum _{i=2}^{2^N}T\left( \mathbf{n },x,i\right) \right) . \end{aligned}$$

Thus, the proof of the asymptotic normality of \(\left( \widehat{\mathbf{n }}(\sigma ^2(x))\right) ^{-1/2}S_{\mathbf{n }}\) is reduced to the proofs of the following results

$$\begin{aligned} Q_1\equiv \Big |E\left[ \exp \big [iuT(\mathbf n ,x,1)\big ] \right] -\prod _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1} E\left[ \exp \big [iuW(1,\mathbf n ,x,\mathbf j )\big ]\right] \Big |\rightarrow 0 \end{aligned}$$
(13)
$$\begin{aligned} Q_2\equiv \widehat{\mathbf{n }}^{-1}E\Big [\sum _{i=2}^{2^N}T(\mathbf n ,x,i)\Big ]^2\rightarrow 0 \end{aligned}$$
(14)
$$\begin{aligned} Q_3\equiv \widehat{\mathbf{n }}^{-1}\sum _\mathbf{j \in \mathcal {J}}E\Big [W(1,\mathbf n ,x,\mathbf j )\Big ]^2\rightarrow \sigma ^2(x) \end{aligned}$$
(15)
$$\begin{aligned} Q_4\equiv \widehat{\mathbf{n }}^{-1}\sum _\mathbf{j \in \mathcal {J}}E\Big [(W(1,\mathbf n ,x,\mathbf j ))^2\mathbf 1 _{\{|W(1,\mathbf n ,x,\mathbf j )|>\epsilon \left( \sigma ^2(x) \widehat{\mathbf{n }}\right) ^{1/2}\}}\Big ]\rightarrow 0 \quad \text{ for } \text{ all } \epsilon >0. \end{aligned}$$
(16)

For (13), we enumerate the \(M=\prod _{k=1}^N r_k={\widehat{\mathbf{n }}}(p_\mathbf{n }+q_\mathbf{n })^{-N}\le \widehat{\mathbf{n }}p_\mathbf{n }^{-N}\) random variables \(W(1,\mathbf n ,x,\mathbf j )\), \(\mathbf j \in \mathcal {J}\), in an arbitrary order as \(\tilde{U}_1,\dots ,\tilde{U}_M\). For \(\mathbf j \in \mathcal {J}\), let

$$\begin{aligned} I(1,\mathbf n ,x,\mathbf j )=\left\{ \mathbf i :j_k(p_\mathbf{n }+q_\mathbf{n })+1\le i_k\le j_k(p_\mathbf{n }+q_\mathbf{n })+p_\mathbf{n }; \quad k=1,\dots N\right\} \end{aligned}$$

then we have \( W(1,\mathbf n ,x,\mathbf j )=\sum \nolimits _\mathbf{i \in I(1,\mathbf n ,x,\mathbf j )}\Lambda _\mathbf{i }\). Note that each of the sets of sites \( I(1,\mathbf n ,x,\mathbf j )\) contains \(p_\mathbf{n }^N \) sites and that distinct sets are at distance at least \(q_\mathbf{n }\) from one another. Further, we apply the lemma of Volkonski and Rozanov (1959) to the variables \(\left( \exp (iu\tilde{U}_1),\dots ,\exp (iu\tilde{U}_M)\right) \). Since \(\displaystyle \Big |\prod \nolimits _{s=j+1}^M\exp [iu\tilde{U}_s]\Big |\le 1\), we obtain:

$$\begin{aligned} Q_1&= \Big |E[\exp \big [iuT(\mathbf n ,x,1)\big ]]-\prod _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1} E[\exp \big [iuW(1,\mathbf n ,x,\mathbf j )\big ]]\Big |\\&= \left| E\prod _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1}\exp \left[ iuW(1,\mathbf n ,x,\mathbf j )\right] -\prod _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1} E\exp \big [iuW(1,\mathbf n ,x,\mathbf j )\big ]\right| \\&\le \sum _{k=1}^{M-1}\sum _{j=k+1}^M\Bigg |E\left( \exp [iu\tilde{U}_k]-1\right) \left( \exp [iu\tilde{U}_j]-1\right) \prod _{s=j+1}^M\exp [iu\tilde{U}_s]\\&\quad - E\left( \exp [iu\tilde{U}_k]-1\right) E\left( \exp [iu\tilde{U}_j]-1\right) \prod _{s=j+1}^M \exp [iu\tilde{U}_s] \Bigg |\\&= \sum _{k=1}^{M-1}\sum _{j=k+1}^M\Big |E\left( \exp [iu\tilde{U}_k]-1\right) \left( \exp [iu\tilde{U}_j]-1\right) \\&\quad -E\left( \exp [iu\tilde{U}_k]-1\right) E\left( \exp [iu\tilde{U}_j]-1\right) \Big |\times \Big |\prod _{s=j+1}^M\exp [iu\tilde{U}_s]\Big |\\&\le \sum _{k=1}^{M-1}\sum _{j=k+1}^M\Big |E\left( \exp [iu\tilde{U}_k]-1\right) \left( \exp [iu\tilde{U}_j]-1\right) \\&\quad -E\left( \exp [iu\tilde{U}_k]-1\right) E\left( \exp [iu\tilde{U}_j]-1\right) \Big |. \end{aligned}$$

Let \(\tilde{I}_j\) denote the set of sites in \( I(1,\mathbf n ,x,\mathbf j )\) such that \(\tilde{U}_j= \sum \nolimits _\mathbf{i \in \tilde{I}_{j}}\Lambda _\mathbf{i }\). The lemma of Carbon et al. (1997) and assumption (2) lead to:

$$\begin{aligned}&\Big |E\left( \exp [iu\tilde{U}_k]-1\right) \left( \exp [iu\tilde{U}_j]-1\right) -E\left( \exp [iu\tilde{U}_k]-1\right) E\left( \exp [iu\tilde{U}_j]-1\right) \Big |\\&\quad \le C\varphi \left( d(\tilde{I}_j,\tilde{I}_k)\right) p_\mathbf{n }^N. \end{aligned}$$

Then

$$\begin{aligned} Q_1&\le Cp_\mathbf{n }^N \sum _{k=1}^{M-1}\sum _{j=k+1}^M \varphi \left( d(\tilde{I}_j,\tilde{I}_k)\right) \\&\le Cp_\mathbf{n }^NM\sum _{k=2}^M \varphi \left( d(\tilde{I}_1,\tilde{I}_k)\right) \\&\le Cp_\mathbf{n }^NM\sum _{i=1}^{\infty }\sum _{k: iq_\mathbf{n }\le d(\tilde{I}_1,\tilde{I}_k)<(i+1)q_\mathbf{n }}\varphi \left( d(\tilde{I}_1,\tilde{I}_k)\right) \\&\le Cp_\mathbf{n }^NM\sum _{i=1}^{\infty }i^{N-1}\varphi (iq_\mathbf{n }) \\&\le C\widehat{\mathbf{n }}q_\mathbf{n }^{-\delta }\sum _{i=1}^{\infty }i^{N-1-\delta }, \end{aligned}$$

by (H3). This last term tends to zero, since \(\widehat{\mathbf{n }}q_\mathbf{n }^{-\delta }\rightarrow 0\) (see (H11)).

Proof of (14): We have

$$\begin{aligned} Q_2&\equiv \widehat{\mathbf{n }}^{-1}E\left[ \sum _{i=2}^{2^N}T(\mathbf n ,x,i)\right] ^2\\&= \widehat{\mathbf{n }}^{-1}\left( \sum _{i=2}^{2^N}E\left[ T(\mathbf n ,x,i)\right] ^2+\sum _{\begin{array}{c} i,j=2,\dots ,2^N\\ i\ne j \end{array}}E\left[ T(\mathbf n ,x,i)T(\mathbf n ,x,j)\right] \right) . \end{aligned}$$

By the Cauchy–Schwarz inequality, we get:

$$\begin{aligned}&\forall ~2\le i,j\le 2^N : \widehat{\mathbf{n }}^{-1}E\left[ T(\mathbf n ,x,i)T(\mathbf n ,x,j)\right] \\&\quad \le \left( \widehat{\mathbf{n }}^{-1}E\left[ T(\mathbf n ,x,i)\right] ^2\right) ^{1/2}\left( \widehat{\mathbf{n }}^{-1}E\left[ T(\mathbf n ,x,j)\right] ^2\right) ^{1/2}. \end{aligned}$$

Then, it suffices to prove that

$$\begin{aligned} \widehat{\mathbf{n }}^{-1}E\left[ T(\mathbf n ,x,i)\right] ^2\rightarrow 0,\quad \forall ~ 2\le i\le 2^N. \end{aligned}$$

We will prove this for \(i=2\); the other cases are similar. We have \(T(\mathbf n ,x,2)=\sum \nolimits _\mathbf{j \in \mathcal {J}}W(2,\mathbf n ,x,\mathbf j )= \sum \nolimits _{j=1}^M \hat{U}_j\), where we enumerate the \(W(2,\mathbf n ,x,\mathbf j )\) in an arbitrary order as \(\hat{U}_1,\dots ,\hat{U}_M \). Then:

$$\begin{aligned} E\left[ T(\mathbf n ,x,2)\right] ^2&= \sum _{i=1}^M \mathrm{Var}\left( \hat{U}_i\right) +\sum _{i=1}^M\sum _{\begin{array}{c} j=1\\ i\ne j \end{array}}^M \mathrm{Cov}\left( \hat{U}_i,\hat{U}_j\right) \\&= A_1+A_2. \end{aligned}$$

The stationarity of the process \(\left( X_\mathbf{i },Y_\mathbf{i }\right) _\mathbf{i \in \mathbb {Z}^N}\) implies that:

$$\begin{aligned} \mathrm{Var}\left( \hat{U}_i\right)&= \mathrm{Var}\left( \sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N-1 \end{array}}^{p_\mathbf{n }}\sum _{i_N=1}^{q_\mathbf{n }}\Lambda _\mathbf{i }\right) \nonumber \\&= p_\mathbf{n }^{N-1}q_\mathbf{n } ~ \mathrm{Var} \left[ \Lambda _\mathbf{i }\right] \nonumber \\&\quad +\sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N-1 \end{array}}^{p_\mathbf{n }}\sum _{i_N=1}^{q_\mathbf{n }} \sum _{\begin{array}{c} j_k=1\\ k=1,\dots ,N-1\\ ~~\\ \mathbf i \ne \mathbf j \end{array}}^{p_\mathbf{n }}\sum _{j_N=1}^{q_\mathbf{n }} E[\Lambda _\mathbf{i }\Lambda _\mathbf{j }]. \end{aligned}$$

We proved above that \(\mathrm{Var}\left[ \Lambda _\mathbf{i } \right] <C\) (see (11)). By Lemma 8, we have:

$$\begin{aligned} \left| E[\Lambda _\mathbf{i }\Lambda _\mathbf{j }]\right| \le C(h_H\phi _x(h_K))^{-1}\varphi \left( \Vert \mathbf i -\mathbf j \Vert \right) . \end{aligned}$$
(17)

Then, we deduce that

$$\begin{aligned} \mathrm{Var}\left[ \hat{U}_i\right]&\le Cp_\mathbf{n }^{N-1}q_\mathbf{n }\left( 1+(h_H\phi _x(h_K))^{-1}\sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N-1 \end{array}}^{p_\mathbf{n }} \sum _{i_N=1}^{q_\mathbf{n }}\left( \varphi \left( \left\| \mathbf i \right\| \right) \right) \right) \nonumber \\&\le Cp_\mathbf{n }^{N-1}q_\mathbf{n } (h_H\phi _x(h_K))^{-1}\sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N-1 \end{array}}^{p_\mathbf{n }}\sum _{i_N=1}^{q_\mathbf{n }} \left( \varphi (\left\| \mathbf i \right\| )\right) . \end{aligned}$$

Consequently, we have:

$$\begin{aligned} A_1\le CMp_\mathbf{n }^{N-1}q_\mathbf{n } (h_H\phi _x(h_K))^{-1}\sum _{i=1}^\infty i^{N-1}\left( \varphi (i)\right) . \end{aligned}$$

Let

$$\begin{aligned} I\left( 2,\mathbf{n },x,\mathbf{j }\right)&= \Big \{\mathbf{i }:j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+1\le i_k\le j_k(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }},~~1\le k\le N-1;\\&\quad j_N(p_{\mathbf{n }}+q_{\mathbf{n }})+p_{\mathbf{n }}+1\le i_N\le (j_N+1)(p_{\mathbf{n }}+q_{\mathbf{n }})\Big \}. \end{aligned}$$

The variable \(W\left( 2,\mathbf n ,x,\mathbf j \right) \) is the sum of the \(\Lambda _\mathbf{i }\) such that \(\mathbf i \) is in \(I\left( 2,\mathbf n ,x,\mathbf j \right) \). Since \(p_\mathbf{n }>q_\mathbf{n }\), if \(\mathbf i \) and \(\mathbf i ^{\prime }\) are, respectively, in the two different sets \(I\left( 2,\mathbf n ,x,\mathbf j \right) \) and \(I\left( 2,\mathbf n ,x,\mathbf j ^{\prime }\right) \), then \(i_k\ne i'_k\) for some \(k\) with \(1\le k\le N\), and \(\Vert \mathbf i -\mathbf i ^{\prime }\Vert >q_\mathbf{n }\).

By using the definition of \(A_2 \), the stationarity of the process and (17), we have:

$$\begin{aligned} A_2\le \displaystyle \sum _{\begin{array}{c} j_k=1\\ k=1,\dots ,N \end{array}}^{n_k}\displaystyle \sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N\\ \Vert \mathbf i -\mathbf j \Vert >q_\mathbf{n } \end{array}}^{n_k}E[\Lambda _\mathbf{i }\Lambda _\mathbf{j }]\le C (h_H\phi _x(h_K))^{-1}\widehat{\mathbf{n }}\displaystyle \sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N\\ \Vert \mathbf i \Vert >q_\mathbf{n } \end{array}}^{n_k}\left( \varphi (\Vert \mathbf i \Vert )\right) \end{aligned}$$

and

$$\begin{aligned} A_2\le C(h_H\phi _x(h_K))^{-1}\widehat{\mathbf{n }}\sum _{i=q_\mathbf{n }}^\infty i^{N-1}\left( \varphi (i)\right) . \end{aligned}$$

We deduce that:

$$\begin{aligned} \widehat{\mathbf{n }}^{-1}E\left[ T(\mathbf n ,x,2)\right] ^2&\le CMp_\mathbf{n }^{N-1}q_\mathbf{n }\widehat{\mathbf{n }} ^{-1} (h_H\phi _x(h_K))^{-1}\sum _{i=1}^\infty i^{N-1-\delta }\nonumber \\&\quad +C(h_H\phi _x(h_K))^{-1}\sum _{i=q_\mathbf{n }}^\infty i^{N-1-\delta }. \end{aligned}$$

From \((p_\mathbf{n }+q_\mathbf{n })^{-N}p_\mathbf{n }^{N-1}q_\mathbf{n }=(p_\mathbf{n }+q_\mathbf{n })^{-N}p_\mathbf{n }^N \left( \frac{q_\mathbf{n }}{p_\mathbf{n }}\right) \le \frac{q_\mathbf{n }}{p_\mathbf{n }}\), we get:

$$\begin{aligned} CMp_\mathbf{n }^{N-1}q_\mathbf{n }\widehat{\mathbf{n }}^{-1} (h_H\phi _x(h_K))^{-1}&= \widehat{\mathbf{n }}(p_\mathbf{n }+q_\mathbf{n })^{-N}p_\mathbf{n }^{N-1}q_\mathbf{n } \widehat{\mathbf{n }}^{-1} (h_H\phi _x(h_K))^{-1}\nonumber \\&\le \left( \frac{q_\mathbf{n }}{p_\mathbf{n }}\right) (h_H\phi _x(h_K))^{-1}\nonumber \\&= q_\mathbf{n }s_\mathbf{n }\left( \widehat{\mathbf{n }}(h_H\phi _x(h_K))\right) ^{\frac{-1}{2N}}(h_H\phi _x(h_K))^{-1}\nonumber \\&= q_\mathbf{n }s_\mathbf{n }\left( \widehat{\mathbf{n }}(h_H\phi _x(h_K))^{(1+2N)}\right) ^{\frac{-1}{2N}}. \end{aligned}$$

By the definitions of \(q_\mathbf{n }\) and \(s_\mathbf{n }\), this last term converges to \(0\). Moreover, we have:

$$\begin{aligned}&C(h_H\phi _x(h_K))^{-1}\sum _{i=q_\mathbf{n }}^\infty i^{N-1-\delta }\le C(h_H\phi _x(h_K))^{-1}\int _{q_\mathbf{n }}^\infty t^{N-1-\delta }dt\\&\quad =C(h_H\phi _x(h_K))^{-1}q_\mathbf{n }^{N-\delta }. \end{aligned}$$

This last term converges to zero by (H11), which ends the proof of (14). \(\square \)

Proof of (15): We use the following decomposition into big and small blocks

$$\begin{aligned} S'_\mathbf{n }=T\left( \mathbf n ,x,1\right) ,\qquad S''_\mathbf{n }=\sum _{i=2}^{2^N}T\left( \mathbf n ,x,i\right) . \end{aligned}$$

Then, we can write:

$$\begin{aligned} \widehat{\mathbf{n }}^{-1}E \left[ S'_\mathbf{n }\right] ^2=\widehat{\mathbf{n }}^{-1}E[ S_\mathbf{n }^2]+\widehat{\mathbf{n }}^{-1} E \left[ S''_\mathbf{n }\right] ^2-2\widehat{\mathbf{n }}^{-1}E [S_\mathbf{n }S''_\mathbf{n }]. \end{aligned}$$

Lemma 5 and (14) imply, respectively, that \(\widehat{\mathbf{n }}^{-1}E \left( S_\mathbf{n }\right) ^2= \widehat{\mathbf{n }}^{-1}\mathrm{Var} \left( S_\mathbf{n }\right) \rightarrow \sigma ^2(x)\) and \(\widehat{\mathbf{n }}^{-1}E \left[ S''_\mathbf{n }\right] ^2\rightarrow 0\).

Then, to show that \(\widehat{\mathbf{n }}^{-1}E \left[ S'_\mathbf{n }\right] ^2\rightarrow \sigma ^2(x)\), it suffices to remark that \(\widehat{\mathbf{n }}^{-1}E[ S_\mathbf{n }S''_\mathbf{n }]\rightarrow 0\) because, by the Cauchy–Schwarz inequality, we can write:

$$\begin{aligned} \left| \widehat{\mathbf{n }}^{-1}E[ S_\mathbf{n }S''_\mathbf{n }]\right| \le \widehat{\mathbf{n }}^{-1}E \left| S_\mathbf{n }S''_\mathbf{n }\right| \le \left( \widehat{\mathbf{n }}^{-1}E[ S_\mathbf{n }^2]\right) ^{1/2}\left( \widehat{\mathbf{n }}^{-1}E[ {S''_\mathbf{n }}^2]\right) ^{1/2}. \end{aligned}$$

Recall that \(T\left( \mathbf n ,x,1\right) =\sum _\mathbf{j \in \mathcal {J}}W\left( 1,\mathbf n ,x,\mathbf j \right) ,\) so

$$\begin{aligned} \widehat{\mathbf{n }}^{-1}E \left( S'_\mathbf{n }\right) ^2&= \widehat{\mathbf{n }}^{-1}\sum _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1}E\left[ W\left( 1,\mathbf n ,x,\mathbf j \right) \right] ^2\\&\quad +\widehat{\mathbf{n }}^{-1} \times \displaystyle \sum _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1}\displaystyle \sum _{\begin{array}{c} i_k=0\\ k=1,\dots ,N\\ i_k\ne j_k\ \text {for some } k \end{array}}^{r_k-1}\mathrm{Cov}(W(1,\mathbf n ,x,\mathbf j ),W(1,\mathbf n ,x,\mathbf i )). \end{aligned}$$

By arguments similar to those used above for \(A_2\), this last term is not greater than

$$\begin{aligned}&C(h_H\phi _x(h_K))^{-1}\displaystyle \sum _{\begin{array}{c} i_k=1\\ k=1,\dots ,N\\ \Vert \mathbf i \Vert >q_\mathbf{n } \end{array}}^{r_k-1 }\left( \varphi (\Vert \mathbf i \Vert )\right) \nonumber \\&\qquad \le C(h_H\phi _x(h_K))^{-1}\sum _{i=q_\mathbf{n }}^\infty i^{N-1}\left( \varphi (i)\right) \le C(h_H\phi _x(h_K))^{-1}q_\mathbf{n }^{N-\delta }\rightarrow 0. \end{aligned}$$

So \(Q_3\rightarrow \sigma ^2(x)\), which ends the proof of (15).

Proof of (16): Since \(\left| \Lambda _\mathbf{i }\right| \le C(h_H\phi _x(h_K))^{-1/2}\), we have \(\left| W\left( 1,\mathbf n ,x,\mathbf j \right) \right| \le Cp_\mathbf{n }^N(h_H\phi _x(h_K))^{-1/2}\). Then we deduce that

$$\begin{aligned} Q_4\le Cp_\mathbf{n }^{2N}(h_H\phi _x(h_K))^{-1}\widehat{\mathbf{n }}^{-1}\sum _{\begin{array}{c} j_k=0\\ k=1,\dots ,N \end{array}}^{r_k-1}P\left[ \left| W\left( 1,\mathbf n ,x,\mathbf j \right) \right| >\epsilon \left( \sigma ^2(x)\widehat{\mathbf{n }}\right) ^{1/2}\right] . \end{aligned}$$

We have \(\left| W\left( 1,\mathbf n ,x,\mathbf j \right) \right| /\left( \left( \sigma ^2(x)\widehat{\mathbf{n }}\right) ^{1/2}\right) \le Cp_\mathbf{n }^N\left( \widehat{\mathbf{n }}h_H\phi _x(h_K) \right) ^{-1/2}=C\left( s_\mathbf{n }\right) ^{-N}\rightarrow 0\), because \(p_\mathbf{n }=\left[ \left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{1/(2N)}/s_\mathbf{n }\right] \) and \(s_\mathbf{n }\rightarrow \infty \).
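Spelling out the arithmetic behind this order check, using the definition of \(p_\mathbf{n }\):

$$\begin{aligned} p_\mathbf{n }^N\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{-1/2}\le \left[ \frac{\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{1/(2N)}}{s_\mathbf{n }}\right] ^N\left( \widehat{\mathbf{n }}h_H\phi _x(h_K)\right) ^{-1/2}=s_\mathbf{n }^{-N}. \end{aligned}$$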

So, for all \(\epsilon >0\) and \(\mathbf j \in \mathcal {J}\), if \(\widehat{\mathbf{n }}\) is large enough, then \(P\left[ \left| W\left( 1,\mathbf n ,x,\mathbf j \right) \right| >\epsilon \left( \sigma ^2(x)\widehat{\mathbf{n }}\right) ^{1/2}\right] =0\). Hence \(Q_4=0\) for \(\widehat{\mathbf{n }}\) large enough, which yields the proof. \(\square \)


Cite this article

Dabo-Niang, S., Kaid, Z. & Laksaci, A. Asymptotic properties of the kernel estimate of spatial conditional mode when the regressor is functional. AStA Adv Stat Anal 99, 131–160 (2015). https://doi.org/10.1007/s10182-014-0233-5

