
A new estimation in functional linear concurrent model with covariate dependent and noise contamination


Abstract

The functional linear concurrent regression model is an important model in functional regression. It is usually assumed that realizations of the functional covariate are independent and observed precisely. In practice, however, dependence across different sample curves often exists, and each realization of the functional covariate may be contaminated with noise. To address this issue, we propose a novel estimation method that makes full use of the dependence information and filters out the impact of measurement noise. We then extend the proposed method to partially observed functional data. Under some regularity conditions, we establish the asymptotic properties of the estimators of the model. The finite-sample performance of our estimation is illustrated by Monte Carlo simulation studies and a real data example. The numerical results reveal that the proposed method exhibits superior performance compared with existing methods.
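To make the setting concrete, here is a minimal simulation sketch (not the authors' code) of the model described above: responses follow a functional linear concurrent model, the covariate curves are dependent across subjects, and only noise-contaminated covariate curves are observed. The functional AR(1) dependence, the Fourier-type curve construction and all parameter values are assumptions made purely for illustration.

```python
import numpy as np

# Minimal sketch (not the authors' code): simulate a functional linear
# concurrent model Y_i(t) = beta0(t) + beta1(t) X_i(t) + eps_i(t), where the
# covariate curves X_i are serially dependent (functional AR(1), an assumed
# choice) and only noisy versions W_i(t) = X_i(t) + e_i(t) are observed.

rng = np.random.default_rng(0)
n, n_grid = 200, 101                 # number of curves, grid points on [0, 1]
t = np.linspace(0.0, 1.0, n_grid)

beta0 = np.sin(2 * np.pi * t)        # assumed intercept function
beta1 = 1.0 + 0.5 * t ** 2           # assumed slope function
rho = 0.5                            # assumed serial dependence across curves

def smooth_curve(rng, t):
    """One smooth random curve from a small Fourier-type expansion."""
    k = np.arange(1, 5)
    coefs = rng.normal(scale=1.0 / k)
    const = rng.normal(scale=0.5)
    return const + (coefs[:, None] * np.sin(np.outer(k, np.pi * t))).sum(axis=0)

X = np.zeros((n, n_grid))
X[0] = smooth_curve(rng, t)
for i in range(1, n):                # functional AR(1): dependence across curves
    X[i] = rho * X[i - 1] + smooth_curve(rng, t)

e = rng.normal(scale=0.3, size=(n, n_grid))    # contamination of the covariate
eps = rng.normal(scale=0.3, size=(n, n_grid))  # regression error
W = X + e                                      # observed, noisy covariate curves
Y = beta0 + beta1 * X + eps                    # responses driven by the true X
```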


References

  • Bathia N, Yao Q, Ziegelmann F (2010) Identifying the finite dimensionality of curve time series. Ann Stat 38(6):3352–3386

  • Bosq D (2012) Linear processes in function spaces: theory and applications, vol 149. Springer Science and Business Media

  • Bücher A, Dette H, Heinrichs F (2019) Detecting deviations from second-order stationarity in locally stationary functional time series. Annals of the Institute of Statistical Mathematics pp 1–40

  • Cerqueira V, Torgo L, Mozetič I (2020) Evaluating time series forecasting models: an empirical study on performance estimation methods. Mach Learn 109(11):1997–2028

  • Chang J, Chen C, Qiao X, Yao Q (2022) An autocovariance-based learning framework for high-dimensional functional time series, arXiv:2008.12885v3

  • Chen C, Guo S, Qiao X (2022) Functional linear regression: dependence and error contamination. J Bus Econ Stat 40(1):444–457

  • Chen K, Zhang X, Petersen A, Müller H-G (2017) Quantifying infinite-dimensional data: functional data analysis in action. Stat Biosci 9(2):582–604

  • Davydov YA (1968) Convergence of distributions generated by stationary stochastic processes. Theory Probab Appl 13(4):691–696

  • Delsol L (2009) Advances on asymptotic normality in non-parametric functional time series analysis. Statistics 43(1):13–33

  • Delsol L (2010) Nonparametric methods for \(\alpha \)-mixing functional random variables. The Oxford handbook of functional data analysis

  • DeVore RA, Lorentz GG (1993) Constructive approximation, vol 303. Springer Science and Business Media

  • Eggermont P, Eubank R, LaRiccia V (2010) Convergence rates for smoothing spline estimators in varying coefficient models. J Stat Plan Inference 140(2):369–381

  • Ferraty F (2011) Recent advances in functional data analysis and related topics. Springer Science & Business Media

  • Ferraty F, Vieu P (2006) Nonparametric functional data analysis: theory and practice. Springer Science & Business Media

  • Gelfand AE, Kim H-J, Sirmans C, Banerjee S (2003) Spatial modeling with spatially varying coefficient processes. J Am Stat Assoc 98(462):387–396

  • Hall P, Müller H-G, Wang J-L (2006) Properties of principal component methods for functional and longitudinal data analysis. Ann Stat 34(3):1493–1517

  • Horváth L, Hušková M, Rice G (2013) Test of independence for functional data. J Multivar Anal 117:100–119

  • Horváth L, Kokoszka P (2012) Inference for functional data with applications, vol 200. Springer Science and Business Media

  • Hsing T, Eubank RL (2015) Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators. Wiley, New York

  • Huang JZ, Wu CO, Zhou L (2002) Varying-coefficient models and basis function approximations for the analysis of repeated measurements. Biometrika 89(1):111–128

  • Huang JZ, Wu CO, Zhou L (2004) Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statistica Sinica pp 763–788

  • Manrique T, Crambes C, Hilgert N et al (2018) Ridge regression for the functional concurrent model. Electron J Stat 12(1):985–1018

  • Morris JS (2015) Functional regression. Ann Rev Stat Appl 2(1):321–359

  • Qu A, Li R (2006) Quadratic inference functions for varying-coefficient models with longitudinal data. Biometrics 62(2):379–391

  • Ramsay JO, Silverman BW (2005) Functional data analysis. Springer Science and Business Media

  • Reiss PT, Goldsmith J, Shang HL, Ogden RT (2017) Methods for scalar-on-function regression. Int Stat Rev 85(2):228–249

  • Schumaker L (2007) Spline Functions: Basic Theory. Cambridge University Press, Cambridge

  • Şentürk D, Müller H-G (2010) Functional varying coefficient models for longitudinal data. J Am Stat Assoc 105(491):1256–1264

  • Şentürk D, Nguyen DV (2011) Varying coefficient models for sparse noise-contaminated longitudinal data. Stat Sin 21(4):1831

  • Stone CJ (1982) Optimal global rates of convergence for nonparametric regression. Ann Stat, pp 1040–1053

  • Wang J-L, Chiou J-M, Müller H-G (2016) Functional data analysis. Ann Rev Stat Appl 3:257–295

  • Wu CO, Chiang C-T, Hoover DR (1998) Asymptotic confidence regions for kernel smoothing of a varying-coefficient model with longitudinal data. J Am Stat Assoc 93(444):1388–1402

  • Yang J, Deng X, Liu Q, Ding R (2020) Temperature error-correction method for surface air temperature data. Meteorol Appl 27(6):e1972

  • Zhang J-T, Chen J (2007) Statistical inferences for functional data. Ann Stat 35(3):1052–1079

  • Zhang X, Wang J-L (2016) From sparse to dense functional data and beyond. Ann Stat 44(5):2281–2321

  • Zhu T, Politis DN et al (2017) Kernel estimates of nonparametric functional autoregression models and their bootstrap approximation. Electron J Stat 11(2):2876–2906

Author information

Corresponding author

Correspondence to Hui Ding.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ding’s research was partially supported by the National Natural Science Foundation of China (11901286). Yao’s research was partially supported by the National Natural Science Foundation of China (11901149). Zhang’s research was partially supported by the National Natural Science Foundation of China (11971171, 11831008, 12171310), and the Basic Research Project of Shanghai Science and Technology Commission (22JC1400800).

Appendix

In this section, we prove Theorems 1 and 2. We first list several lemmas that will be needed in the proofs.

Lemma 1

Let \(B_1(t),\dots ,B_{K}(t)\) be a set of B-spline basis functions defined on a compact interval \({\mathcal {T}}\). For any spline function \(\sum _{k=1}^{K}b_kB_k(t)\), there exist two positive constants \(M_1\) and \(M_2\) such that

$$\begin{aligned} \frac{M_1\Vert \varvec{b}\Vert ^2}{K} \le \int _{{\mathcal {T}}} \left[ \sum _{k=1}^{K}b_kB_k(t)\right] ^2 \mathrm{{d}}t \le \frac{M_2\Vert \varvec{b}\Vert ^2}{K}, \end{aligned}$$

where \(\Vert \varvec{b}\Vert ^2=\sum _{k=1}^{K}b_k^2\).

Proof

This lemma comes from Theorem 4.2 of Chapter 5 of DeVore and Lorentz (1993). \(\square \)
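The norm equivalence in Lemma 1 is also easy to check numerically. The snippet below (an illustrative check, not part of the proof) evaluates the ratio of \(\int (\sum _{k}b_kB_k(t))^2 \mathrm{{d}}t\) to \(\Vert \varvec{b}\Vert ^2/K\) for cubic B-splines on \([0,1]\); the knot layout, the value of K and the coefficient vector are assumptions made only for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

# Illustrative numerical check of the norm equivalence in Lemma 1 for cubic
# B-splines on [0, 1].  Knot layout, K, and the coefficients are assumptions.
degree = 3
n_interior = 10
knots = np.concatenate([np.zeros(degree + 1),
                        np.linspace(0, 1, n_interior + 2)[1:-1],
                        np.ones(degree + 1)])
K = len(knots) - degree - 1          # number of basis functions B_1, ..., B_K

rng = np.random.default_rng(1)
b = rng.normal(size=K)               # an arbitrary coefficient vector

spline = BSpline(knots, b, degree)   # the spline sum_k b_k B_k(t)
grid = np.linspace(0, 1, 2001)
l2_sq = np.mean(spline(grid) ** 2)   # approximates the integral of the squared spline over [0, 1]

ratio = l2_sq / (np.sum(b ** 2) / K)
print(f"K = {K}, ratio of the squared L2 norm to ||b||^2 / K: {ratio:.3f}")
# Repeating this for different b and K, the ratio stays bounded away from
# 0 and infinity, consistent with the constants M_1 and M_2 in Lemma 1.
```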

Lemma 2

Let \({\mathcal {L}}_s({\mathcal {F}})\) denote the class of \({\mathcal {F}}\)-measurable random processes X(t) satisfying \(\Vert X\Vert _{s}=\left( \int \textrm{E}|X(t)|^s\mathrm{{d}}t\right) ^{1/s}<\infty \). Assume X(t) and Y(t) are two random processes such that \(X(t)\in {\mathcal {L}}_s(\sigma (X(t)))\) and \(Y(t)\in {\mathcal {L}}_s(\sigma (Y(t)))\). Let p, q and r be three positive numbers such that \(1/p+1/q+1/r=1\). Then, there exists a constant C such that

$$\begin{aligned} \int |\textrm{Cov}(X(t),Y(t))|\mathrm{{d}}t \le C \Vert X\Vert _{p}\Vert Y\Vert _{q}\alpha \left( \sigma (X(t)),\sigma (Y(t))\right) ^{1/r}, \end{aligned}$$

where \(\alpha \left( \sigma (X(t)),\sigma (Y(t))\right) \) denotes the \(\alpha \)-mixing coefficient between the two \(\sigma \)-fields \(\sigma (X(t))\) and \(\sigma (Y(t))\).

Proof

This lemma comes from Lemma 2.1 of Davydov (1968). \(\square \)
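For later use, it may help to record the special case of Lemma 2 that is applied in the bound (19) of Lemma 4, taking \(p=q=4\) and \(r=2\) (so that \(1/p+1/q+1/r=1\)), with \(X=W_i^cW_{i+m}^c\) and \(Y=W_j^cW_{j+m}^c\):

$$\begin{aligned} \int \left| \textrm{Cov}\left( W_i^c(t)W_{i+m}^c(t),W_j^c(t)W_{j+m}^c(t)\right) \right| \mathrm{{d}}t \le C \left\| W_i^cW_{i+m}^c\right\| _{4}\left\| W_j^cW_{j+m}^c\right\| _{4}\,\alpha _{j-i-m}^{1/2}, \end{aligned}$$

where, for \(j>i+m\), the \(\alpha \)-mixing coefficient between \(\sigma (W_i^cW_{i+m}^c)\) and \(\sigma (W_j^cW_{j+m}^c)\) is bounded by \(\alpha _{j-i-m}\), the coefficient associated with the gap \(j-i-m\) between the two index sets (assuming, as Condition C1 appears to require, that the curve sequence is \(\alpha \)-mixing with coefficients \(\alpha _k\)).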

Lemma 3

Suppose that Conditions C1–C2 hold. Then we have

$$\begin{aligned} \textrm{E}\left\| \frac{1}{\sqrt{n-m}}\sum _{i=1}^{n-m}W_i^c\right\| ^2<\infty . \end{aligned}$$

Proof

This lemma comes from Theorem 2.17 of Bosq (2012). \(\square \)

Lemma 4

Suppose that Conditions C1–C2 hold. Then, for any \(m=1,\dots ,M\), it holds that

$$\begin{aligned} \textrm{E}\left\| \frac{1}{\sqrt{n-m}}\sum _{i=1}^{n-m}\left[ W_i^cW_{i+m}^c-\textrm{E}\left( W_i^cW_{i+m}^c\right) \right] \right\| ^2<\infty . \end{aligned}$$

Proof

Let

$$\begin{aligned} A_m(t)=\frac{1}{\sqrt{n-m}}\sum _{i=1}^{n-m}\left[ W_i^c(t)W_{i+m}^c(t)-\textrm{E}\left( W_i^c(t)W_{i+m}^c(t)\right) \right] ,\quad m=1,\dots ,M. \end{aligned}$$

Then,

$$\begin{aligned} \textrm{E}\Vert A_m\Vert ^2= & {} \frac{\sum _{i=1}^{n-m}\textrm{E}\left\| W_i^cW_{i+m}^c\right\| ^2+2\sum _{1\le i<j \le n-m}\int \textrm{Cov}(W_i^cW_{i+m}^c,W_j^cW_{j+m}^c)}{n-m} \nonumber \\= & {} \frac{B_1+B_2}{n-m} \end{aligned}$$
(15)

Using Hölder's inequality, we deduce that

$$\begin{aligned} B_1 \le \sum _{i=1}^{n-m} \textrm{E}\left\| (W_i^c)^2\right\| \textrm{E}\left\| (W_{i+m}^c)^2\right\| \le \sum _{i=1}^{n-m} \textrm{E}\Vert (W_i^c)^2\Vert ^2 \end{aligned}$$

By Condition C2, there exists a constant \(C_1>0\) such that

$$\begin{aligned} B_1 \le C_1(n-m). \end{aligned}$$
(16)

Write

$$\begin{aligned} B_2= & {} 2\sum _{i=1}^{n-m}\sum _{j=i+1}^{i+m} \int \textrm{Cov}(W_i^cW_{i+m}^c,W_j^cW_{j+m}^c)\nonumber \\{} & {} + 2\sum _{i=1}^{n-m}\sum _{j=i+m+1}^{n-m} \int \textrm{Cov}(W_i^cW_{i+m}^c,W_j^cW_{j+m}^c) \nonumber \\= & {} B_{21}+B_{22}. \end{aligned}$$
(17)

Note that

$$\begin{aligned} B_{21}\le & {} 2\sum _{i=1}^{n-m}\sum _{j=i+1}^{i+m} \left( \int \textrm{E}|W_i^cW_{i+m}^cW_j^cW_{j+m}^c| +\int |\textrm{E}(W_i^cW_{i+m}^c)\textrm{E}(W_j^cW_{j+m}^c)|\right) \nonumber \\\le & {} 2\sum _{i=1}^{n-m}\sum _{j=i+1}^{i+m}\left( \textrm{E}\int (W_i^c)^4+\int \left( \textrm{E}(W_i^c)^2\right) ^2\right) \le 4C_1(n-m)m. \end{aligned}$$
(18)

By Lemma 2 and Condition C1, there exist constants \(C_2,C_3>0\) such that

$$\begin{aligned} B_{22}\le & {} \sum _{i=1}^{n-m}\sum _{j=i+m+1}^{n-m} 2C \Vert W_i^cW_{i+m}^c\Vert _{4}\Vert W_j^cW_{j+m}^c\Vert _{4}\alpha _{j-i-m}^{1/2} \nonumber \\\le & {} \sum _{i=1}^{n-m}\sum _{j=i+m+1}^{n-m} 2C \int (\textrm{E}(W_i^c)^8)^{1/2}\alpha _{j-i-m}^{1/2} \nonumber \\\le & {} \sum _{i=1}^{n-m}\sum _{k=1}^{\infty } C_2 \alpha _{k}^{1/2} < C_3(n-m) \end{aligned}$$
(19)

Combining (15)–(19), there exists a constant \(C_4>0\) such that

$$\begin{aligned} \textrm{E}\Vert A_m\Vert ^2\le C_1+4C_1M+C_3=C_4<\infty . \end{aligned}$$

The proof of Lemma 4 is completed. \(\square \)

Lemma 5

Suppose that Conditions C1–C3 hold. Then there exist two positive constants \(c_3\) and \(c_4\) such that, except on an event whose probability tends to zero, all the eigenvalues of \(K\int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{G}}}(t)\mathrm{{d}}t\) fall between \(c_3\) and \(c_4\).

Proof

Let \(\varvec{D}=K\int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{G}}}(t)\mathrm{{d}}t\), and let \(\lambda _{\max }\) and \(\lambda _{\min }\) be the maximum and minimum eigenvalues of \(\varvec{D}\), respectively. Then we have

$$\begin{aligned} \lambda _{\max }=\max _{\Vert \theta \Vert =1}\theta ^T \varvec{D}\theta ,\quad \lambda _{\min }=\min _{\Vert \theta \Vert =1}\theta ^T \varvec{D}\theta . \end{aligned}$$
(20)

Note that

$$\begin{aligned} \theta ^T \varvec{D}\theta =\theta ^T (\mathrm{E\varvec{D}}) \theta +\left( \theta ^T \varvec{D}\theta -\theta ^T(\mathrm{E\varvec{D}})\theta \right) . \end{aligned}$$
(21)

For the first term, according to Lemma 3, we have

$$\begin{aligned} \theta ^T (\mathrm{E\varvec{D}}) \theta= & {} K\sum _{k=1}^{K}\theta _{k}\sum _{l=1}^{K}\theta _{l}\sum _{m=1}^{M}\left( \int _{T_0}^{T_1}(\textrm{E}W_i^cW_{i+m}^c)^2 B_k(t)B_l(t)\mathrm{{d}}t\right) \\= & {} K\sum _{k=1}^{K}\theta _{k}\sum _{l=1}^{K}\theta _{l}\sum _{m=1}^{M}\left( \int _{T_0}^{T_1}(\gamma _m(t))^2 B_k(t)B_l(t)\mathrm{{d}}t\right) +o_p(1). \end{aligned}$$

By Condition C3 and Lemma 1, we deduce that

$$\begin{aligned} c_3\Vert \theta \Vert ^2\le \theta ^T (\mathrm{E\varvec{D}}) \theta \le c_4\Vert \theta \Vert ^2. \end{aligned}$$
(22)

For the second term, by Lemma 4, we have

$$\begin{aligned} \theta ^T \varvec{D}\theta -\theta ^T(\mathrm{E\varvec{D}})\theta= & {} K\sum _{k=1}^{K}\theta _{k}\sum _{l=1}^{K}\theta _{l}\sum _{m=1}^{M}\left( \int _{T_0}^{T_1}\left[ \left( \sum _{i=1}^{n-m}\frac{W_i^cW_{i+m}^c}{n-m}\right) ^2-(\textrm{E}W_i^cW_{i+m}^c)^2\right] B_kB_l\right) \nonumber \\= & {} o_p(1). \end{aligned}$$
(23)

Lemma 5 follows from Eqs. (20)–(23). \(\square \)

Proof of Theorem 1

In this proof, let C denote a generic positive constant, not depending on n, whose value may differ at each appearance. First, by Corollary 6.21 of Schumaker (2007), there exists a constant C such that

$$\begin{aligned} \beta _1(t)=\sum _{k=1}^{K}{\tilde{b}}_kB_k(t)+R(t) \end{aligned}$$

with

$$\begin{aligned} \sup _{t \in [T_0,T_1]} |R(t)| \le C K^{-r}. \end{aligned}$$
(24)

Let \({\tilde{\varvec{b}}}=({\tilde{b}}_1,\dots ,{\tilde{b}}_{K})^T\) and \({\tilde{\beta }}_1(t)=\sum _{k=1}^{K}{\tilde{b}}_kB_k(t)\). Thus,

$$\begin{aligned} \Vert {\hat{\beta }}_1-\beta _1\Vert ^2 \le 2\Vert {\hat{\beta }}_1-{\tilde{\beta }}_1\Vert ^2+ 2\Vert {\tilde{\beta }}_1-\beta _1\Vert ^2 \le 2\Vert {\hat{\beta }}_1-{\tilde{\beta }}_1\Vert ^2+C K^{-2r} \end{aligned}$$
(25)

Let \(\varvec{D}=K\int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{G}}}(t)\mathrm{{d}}t\). By Lemmas 1 and 5 and the Cauchy–Schwarz inequality, we get

$$\begin{aligned} \Vert {\hat{\beta }}_1-{\tilde{\beta }}_1\Vert ^2= & {} \left\| \varvec{B}^T\left[ \left( \int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{G}}}(t)\mathrm{{d}}t\right) ^{-1}\int _{T_0}^{T_1} {\hat{\varvec{G}}}(t)^T{\hat{\varvec{H}}}(t)\mathrm{{d}}t-{\tilde{\varvec{b}}}\right] \right\| ^2 \nonumber \\\le & {} C\frac{1}{K}\left\| \left( \int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{G}}}(t)\mathrm{{d}}t\right) ^{-1}\int _{T_0}^{T_1} {\hat{\varvec{G}}}(t)^T\left( {\hat{\varvec{H}}}(t)-{\hat{\varvec{G}}}(t){\tilde{\varvec{b}}}\right) \mathrm{{d}}t\right\| ^2 \nonumber \\\le & {} CK\left\| \int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T\left( {\hat{\varvec{H}}}(t)-{\hat{\varvec{G}}}(t){\tilde{\varvec{b}}}\right) \mathrm{{d}}t\right\| ^2 \nonumber \\\le & {} CK\sum _{m=1}^{M}\left[ \int _{T_0}^{T_1}\sum _{k=1}^{K}\sum _{i=1}^{n-m}\frac{B_k W_i^cW_{i+m}^c}{(n-m)^2}\sum _{j=1}^{n-m} \left( Y_j^cW_{j+m}^c-W_j^cW_{j+m}^c{\tilde{\beta }}_1\right) \right] ^2\nonumber \\\le & {} \frac{CK}{n-M}\sum _{m=1}^{M}AE, \end{aligned}$$
(26)

where

$$\begin{aligned} A=\int _{T_0}^{T_1}\left( \sum _{k=1}^{K}B_k\sum _{i=1}^{n-m}\frac{ W_i^cW_{i+m}^c}{n-m}\right) ^2, E=\int _{T_0}^{T_1}\left[ \sum _{j=1}^{n-m}\frac{Y_j^cW_{j+m}^c-W_j^cW_{j+m}^c{\tilde{\beta }}_1}{\sqrt{n-m}}\right] ^2. \end{aligned}$$

By Condition C3 and Lemmas 1 and 4, we deduce that

$$\begin{aligned} \textrm{E}(A)\le & {} 2\int _{T_0}^{T_1}\left( \sum _{k=1}^{K}B_k\right) ^2\nonumber \\{} & {} \textrm{E}\left[ \left( \frac{1}{\sqrt{n-m}}\sum _{i=1}^{n-m}\frac{W_i^cW_{i+m}^c- \textrm{E}\left( W_i^cW_{i+m}^c\right) }{\sqrt{n-m}}\right) ^2+\left( \textrm{E}W_i^cW_{i+m}^c\right) ^2\right] \nonumber \\\le & {} 2C \int _{T_0}^{T_1}\left( \sum _{k=1}^{K}B_k\right) ^2 \le C \end{aligned}$$
(27)

Let

$$\begin{aligned} E_1=\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{\left( Y_j^cW_{j+m}^c-\textrm{E}Y_j^cW_{j+m}^c\right) }{\sqrt{n-m}}\right\| ^2, E_2=\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{\left( \textrm{E}Y_j^cW_{j+m}^c-\textrm{E}W_j^cW_{j+m}^c\beta _1\right) }{\sqrt{n-m}}\right\| ^2,\\ E_3=\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{\left( \textrm{E}W_j^cW_{j+m}^c(\beta _1-{\tilde{\beta }}_1)\right) }{\sqrt{n-m}}\right\| ^2, E_4=\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{\left( W_j^cW_{j+m}^c-\textrm{E}W_j^cW_{j+m}^c\right) {\tilde{\beta }}_1}{\sqrt{n-m}}\right\| ^2. \end{aligned}$$

Note that

$$\begin{aligned} E_1\le & {} 3\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{\varepsilon _jW_{j+m}^c}{\sqrt{n-m}}\right\| ^2 + 3\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{e_j^cW_{j+m}^c}{\sqrt{n-m}}\right\| ^2\nonumber \\{} & {} + 3\textrm{E}\left\| \sum _{j=1}^{n-m}\frac{\left( W_j^cW_{j+m}^c-\textrm{E}W_j^cW_{j+m}^c\right) \beta _1}{\sqrt{n-m}}\right\| ^2\nonumber \\= & {} 3E_{11}+3E_{12}+3E_{13}. \end{aligned}$$
(28)

By Condition C4, we deduce that there exists a constant \(C_0\) such that \(\sup _{t \in [T_0,T_1]}| \beta _1(t)|<C_0\). Hence, using Lemma 4 and Conditions C1–C2, we have

$$\begin{aligned} E_{11}\le \frac{3}{n-m}\sum _{j=1}^{n-m}\textrm{E}\left\| \varepsilon _j^2\right\| \textrm{E}\left\| (W_{j+m}^c)^2\right\| <C, \end{aligned}$$
(29)
$$\begin{aligned} E_{12}\le \frac{3}{n-m}\sum _{j=1}^{n-m}\textrm{E}\left\| (e_j^c)^2\right\| \textrm{E}\left\| (W_{j+m}^c)^2\right\| <C \end{aligned}$$
(30)

and

$$\begin{aligned} E_{13}\le \frac{3C_0}{n-m}\sum _{j=1}^{n-m}\Vert \left( W_j^cW_{j+m}^c-\textrm{E}W_j^cW_{j+m}^c\right) \Vert ^2<C. \end{aligned}$$
(31)

Combining Eqs. (28)–(31), we deduce that

$$\begin{aligned} E_1\le C. \end{aligned}$$
(32)

From Eq. (4), we have

$$\begin{aligned} E_2=0. \end{aligned}$$
(33)

By using Lemma 5 and Eq. (24), we get

$$\begin{aligned} E_3\le \sup _{t \in [T_0,T_1]} |R(t)|^2 \textrm{E}\left\| W_i^cW_{i+m}^c\right\| ^2 \le C K^{-2r}\le C. \end{aligned}$$
(34)
$$\begin{aligned} E_4\le \sup _{t \in [T_0,T_1]} |{\tilde{\beta }}_1(t)|^2 \textrm{E}\left\| \frac{1}{\sqrt{n-m}}\sum _{i=1}^{n-m}\left[ W_i^cW_{i+m}^c-\textrm{E}\left( W_i^cW_{i+m}^c\right) \right] \right\| ^2 \le C. \end{aligned}$$
(35)

Combining Eqs. (32)–(35), we deduce that

$$\begin{aligned} \textrm{E}(E)\le 4E_1+4E_2+4E_3+4E_4\le C. \end{aligned}$$
(36)

Hence, by combining Eqs. (26), (27) and (36), we have

$$\begin{aligned} \Vert {\hat{\beta }}_1-{\tilde{\beta }}_1\Vert ^2 =O_p\left( n^{-1}K\right) . \end{aligned}$$
(37)

Thus, Eq. (13) follows from Eqs. (25) and (37). This completes the proof of Theorem 1. \(\square \)
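To illustrate how an estimator of this form can be computed, the sketch below implements the quantity appearing in (26), namely \({\hat{\beta }}_1(t)=\varvec{B}(t)^T\left( \int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{G}}}(t)\mathrm{{d}}t\right) ^{-1}\int _{T_0}^{T_1}{\hat{\varvec{G}}}(t)^T{\hat{\varvec{H}}}(t)\mathrm{{d}}t\). It assumes, consistently with the expansion in (26) but without access to the definitions in the main text, that the m-th row of \({\hat{\varvec{G}}}(t)\) is \({\hat{\gamma }}_m(t)\left( B_1(t),\dots ,B_K(t)\right) \) with \({\hat{\gamma }}_m(t)=(n-m)^{-1}\sum _iW_i^c(t)W_{i+m}^c(t)\), and that the m-th entry of \({\hat{\varvec{H}}}(t)\) is \((n-m)^{-1}\sum _jY_j^c(t)W_{j+m}^c(t)\); the tuning choices M and K, and the data arrays W, Y, t from the simulation sketch after the abstract, are likewise assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

# Minimal numerical sketch of an estimator of the form analysed in Theorem 1
# (not the authors' code).  W, Y are (n, n_grid) arrays of curves on the grid
# t from the earlier simulation sketch; M, K and the knot layout are assumed.

M, degree, n_interior = 3, 3, 8
knots = np.concatenate([np.zeros(degree + 1),
                        np.linspace(0, 1, n_interior + 2)[1:-1],
                        np.ones(degree + 1)])
K = len(knots) - degree - 1
basis = np.column_stack([BSpline(knots, np.eye(K)[k], degree)(t)
                         for k in range(K)])        # (n_grid, K): B_k(t)

Wc = W - W.mean(axis=0)                             # centred observed curves
Yc = Y - Y.mean(axis=0)
n = W.shape[0]

gamma_hat = np.stack([(Wc[:n - m] * Wc[m:]).mean(axis=0)
                      for m in range(1, M + 1)])    # (M, n_grid): lag-m autocovariances of W
h_hat = np.stack([(Yc[:n - m] * Wc[m:]).mean(axis=0)
                  for m in range(1, M + 1)])        # (M, n_grid): lag-m cross-moments of Y and W

# G_hat(t) is M x K with entries gamma_hat_m(t) B_k(t); integrals over t are
# approximated by grid averages (the grid is equally spaced on [0, 1]).
G = gamma_hat[:, :, None] * basis[None, :, :]       # (M, n_grid, K)
GtG = np.einsum('mtk,mtl->kl', G, G) / t.size       # approximates the integral of G^T G over t
GtH = np.einsum('mtk,mt->k', G, h_hat) / t.size     # approximates the integral of G^T H over t

# Lemma 5 says the eigenvalues of K * GtG stay bounded away from 0 and
# infinity, so the linear solve below is stable.
b_hat = np.linalg.solve(GtG, GtH)                   # spline coefficients
beta1_hat = basis @ b_hat                           # estimate of beta_1(t) on the grid
```

On the simulated data, beta1_hat can be compared with the true slope function beta1 on the grid as a quick sanity check; in practice M and K would have to be chosen by the user.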

Proof of Theorem 2

In this proof, let C denote a generic positive constant, not depending on n, whose value may differ at each appearance. By Theorem 2.17 of Bosq (2012), we have

$$\begin{aligned} \textrm{E}\left\| \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\left( W_i-\textrm{E}W\right) \right\| ^2<C. \end{aligned}$$

According to Condition C2, we deduce that

$$\begin{aligned} \sup _{t \in [T_0,T_1]}\left( \frac{1}{n}\sum _{i=1}^{n}W_i(t)\right) ^2\le & {} \frac{2}{n}\sup _{t \in [T_0,T_1]}\left( \sum _{i=1}^{n}\frac{W_i(t) -\textrm{E}W(t)}{\sqrt{n}}\right) ^2\nonumber \\{} & {} +2\sup _{t \in [T_0,T_1]}\left( \textrm{E}W(t)\right) ^2=O_p(1). \end{aligned}$$
(38)

Since \(e_i(t)\,(i=1,\dots ,n)\) and \(\varepsilon _i(t)\,(i=1,\dots ,n)\) are independent and identically distributed zero-mean random processes, we have

$$\begin{aligned} \textrm{E}\left\| \frac{1}{\sqrt{n}}\sum _{i=1}^{n}e_i\right\| ^2<C. \end{aligned}$$
(39)

and

$$\begin{aligned} \textrm{E}\left\| \frac{1}{\sqrt{n}}\sum _{i=1}^{n}\varepsilon _i\right\| ^2<C. \end{aligned}$$
(40)

According to Condition C4 and Eq. (39), we have

$$\begin{aligned} \left\| \beta _1\frac{1}{n}\sum _{i=1}^{n}e_i\right\| ^2 \le \sup _{t \in [T_0,T_1]}|\beta _1(t)|^2\Vert \frac{1}{n}\sum _{i=1}^{n}e_i\Vert ^2=O_p\left( n^{-1}\right) . \end{aligned}$$
(41)

Combining Eqs. (1), (12), (38), (40), (41) and Theorem 1, we have

$$\begin{aligned} \Vert {\hat{\beta }}_0-\beta _0\Vert ^2\le & {} 3\left\| \frac{1}{n}\sum _{i=1}^{n}W_i\left( {\hat{\beta }}_1-\beta _1\right) \right\| ^2+ 3\left\| \beta _1\frac{1}{n}\sum _{i=1}^{n}e_i\right\| ^2 +3\left\| \frac{1}{n}\sum _{i=1}^{n}\varepsilon _i\right\| ^2\\\le & {} 3\sup _{t \in [T_0,T_1]}\left( \frac{1}{n}\sum _{i=1}^{n}W_i(t)\right) ^2\Vert {\hat{\beta }}_1-\beta _1\Vert ^2+O_p\left( n^{-1}\right) +O_p\left( n^{-1}\right) \\= & {} O_p\left( n^{-1}K+K^{-2r}\right) \end{aligned}$$

This completes the proof of Theorem 2. \(\square \)
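As a brief side note (an observation, not a statement taken from this excerpt), the two terms in the final bound balance under the usual choice of the spline dimension:

$$\begin{aligned} n^{-1}K+K^{-2r}\asymp n^{-2r/(2r+1)} \qquad \text {when } K\asymp n^{1/(2r+1)}, \end{aligned}$$

since then \(n^{-1}K=n^{-1+1/(2r+1)}=n^{-2r/(2r+1)}\) and \(K^{-2r}=n^{-2r/(2r+1)}\), which is the classical nonparametric rate of Stone (1982).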

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Ding, H., Yao, M. & Zhang, R. A new estimation in functional linear concurrent model with covariate dependent and noise contamination. Metrika 86, 965–989 (2023). https://doi.org/10.1007/s00184-023-00900-w

