
Polynomial spline estimation for partial functional linear regression models

Original Paper

Abstract

Because of its orthogonality, interpretability and optimal representation properties, the functional principal component analysis approach has been used extensively to estimate the slope function in the functional linear model. By contrast, the polynomial spline method, a very popular smoothing technique in nonparametric/semiparametric regression, has received little attention in the functional data setting. In this paper, we propose a polynomial spline method for estimating a partial functional linear model. Some asymptotic results are established, including asymptotic normality for the parameter vector and the global rate of convergence for the slope function. Finally, we evaluate the performance of the proposed estimation method through simulation studies.


Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant Nos. 10961026, 11171293, 11225103, 11301464), the Ph.D. Special Scientific Research Foundation of Chinese Universities (20115301110004), the Key Fund of Yunnan Province (Grant No. 2010CC003) and the Scientific Research Foundation of the Yunnan Provincial Department of Education (No. 2013Y360). We are grateful to the referees and the editors for their constructive remarks, which greatly improved the manuscript.

Author information

Corresponding author

Correspondence to Jianjun Zhou.

Appendix

In this appendix, we give the proofs of the theorems and the corollary in Sect. 3.

Set \(B_{s}={K_n}^{1/2}N_{s}^{b}\), \(s=1,\ldots , K_n\), where the \(N_{s}^{b}\) are the normalized B-splines. By Theorem 4.2 in Chapter 5 of DeVore and Lorentz (1993), for any spline function \(\sum _{s=1}^{K_n} b_s B_{s}\) there are positive constants \(M_1\) and \(M_2\) such that

$$\begin{aligned} M_1\Vert b\Vert _2^2\le \int \left\{ \sum _{s=1}^{K_n} b_s B_{s}\right\} ^2\le M_2\Vert b\Vert _2^2, \end{aligned}$$
(7)

where \(\Vert \cdot \Vert _2\) is the Euclidean norm. Let \(\Vert r\Vert _{\infty }=\sup \nolimits _{x\in [0,1]} |r(x)|\).
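
As an informal numerical check of (7) (our own illustration; the cubic order, knot count and evaluation grid are arbitrary choices, not taken from the paper), the following Python sketch builds the normalized B-splines \(N_s^b\) on equally spaced knots, rescales them to \(B_s=K_n^{1/2}N_s^b\), and verifies that the eigenvalues of the Gram matrix \(\big (\int B_sB_u\big )_{s,u}\), which bound the ratio \(\int \{\sum _s b_sB_s\}^2/\Vert b\Vert _2^2\), stay in a fixed interval \([M_1,M_2]\).

```python
# A minimal sketch checking the norm equivalence (7); the spline order,
# knot count, and grid are illustrative choices, not taken from the paper.
import numpy as np
from scipy.interpolate import BSpline

k, n_knots = 3, 10                 # cubic splines, 10 interior knots
K_n = n_knots + k + 1              # dimension of the spline space
# clamped knot sequence on [0, 1]
tknots = np.r_[np.zeros(k + 1), np.linspace(0, 1, n_knots + 2)[1:-1], np.ones(k + 1)]

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
# column s is the normalized B-spline N_s^b; rescale to B_s = K_n^{1/2} N_s^b
N = np.column_stack([BSpline(tknots, np.eye(K_n)[s], k)(x) for s in range(K_n)])
B = np.sqrt(K_n) * N

# Gram matrix G[s, u] ~ int B_s B_u, so int (sum_s b_s B_s)^2 = b' G b
G = B.T @ B * dx
eigs = np.linalg.eigvalsh(G)
print("M1 ~", eigs.min(), " M2 ~", eigs.max())  # bounded away from 0 and infinity
```

The \(K_n^{1/2}\) rescaling is exactly what keeps both constants of order one: without it the Gram eigenvalues of the normalized B-splines shrink at rate \(K_n^{-1}\).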

In order to prove the theorems, we need the following two lemmas.

Lemma 1

If conditions (C1) and (C2) hold, then we have

  1. (i)
    $$\begin{aligned} \sup _{a\in S_{k,N_n}}\Big |\frac{\frac{1}{n}\sum _{i=1}^n \langle X_i,a \rangle ^2}{E \langle X,a \rangle ^2}-1\Big |=o_p(1). \end{aligned}$$
  2. (ii)

    there exists an interval \([M_3,M_4],0<M_3<M_4<\infty \) such that as \(n\rightarrow \infty \),

    $$\begin{aligned} P\Big \{\text {all the eigenvalues of}~\frac{1}{n}B^TB~\text {fall in}~[M_3,M_4]\Big \}\rightarrow 1. \end{aligned}$$

Note that Lemma 1 generalizes Lemmas 1 and 2 of Huang and Shen (2004b) to the functional data case. We give a brief proof below.

Proof

(i) Let \(\Gamma _n\) denote the empirical version of the operator \(\Gamma \), that is,

$$\begin{aligned} \Gamma _n x(t)=\frac{1}{n}\sum _{i=1}^n \langle X_i,x \rangle X_i(t),\quad x\in H,t\in [0,1]. \end{aligned}$$

By the Cauchy–Schwarz inequality, condition (C2) and (28) in Cardot et al. (2003), we have

$$\begin{aligned} \Big |\frac{\frac{1}{n}\sum _{i=1}^n \langle X_i,a \rangle ^2}{E \langle X,a \rangle ^2}-1\Big |= & {} \Big |\frac{\langle (\Gamma _n-\Gamma )a,a\rangle }{\langle \Gamma a,a\rangle }\Big |\\\le & {} \frac{\Vert \Gamma _n-\Gamma \Vert _{\infty }\Vert a\Vert ^2}{C\Vert a\Vert ^2}\\= & {} \frac{\Vert \Gamma _n-\Gamma \Vert _{\infty }}{C}. \end{aligned}$$

Then for an arbitrary constant \(\epsilon >0\), by Lemma 5.2 in Cardot et al. (1999), we have

$$\begin{aligned} P\left\{ \sup _{a\in S_{k,N_n}}\Big |\frac{\frac{1}{n}\sum _{i=1}^n\langle X_i,a\rangle ^2}{E\langle X,a\rangle ^2}-1\Big |>\epsilon \right\}\le & {} P\Big \{ \Vert \Gamma _n-\Gamma \Vert _{\infty }>C\epsilon \Big \}\\\le & {} \frac{E\Vert \Gamma _n-\Gamma \Vert _{\infty }^2}{C^2\epsilon ^2}\\\le & {} \frac{E\Vert X\Vert ^4}{nC^2\epsilon ^2}, \end{aligned}$$

which, together with (C2), gives the result.

(ii) Let \(b=(b_1,\ldots ,b_{K_n})^T\) and \(a=\sum _{s=1}^{K_n} b_sB_s\). It follows from (i) that, except on an event whose probability tends to zero as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{1}{n}b^TB^TBb=\frac{1}{n}\sum ^n_{i=1}\left( \sum ^{K_n}_{s=1}b_s\langle X_i,B_s\rangle \right) ^2\asymp E\langle X,a\rangle ^2. \end{aligned}$$

By the Cauchy–Schwarz inequality, (28) in Cardot et al. (2003) and (7),

$$\begin{aligned} E\langle X,a\rangle ^2\asymp \Vert a\Vert ^2\asymp \Vert b\Vert _2^2. \end{aligned}$$

Thus, except on an event whose probability tends to zero, \(\frac{1}{n}b^TB^TBb\asymp \Vert b\Vert _2^2\) holds uniformly for all \(b\), which yields the result. \(\square \)
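
To complement the proof, here is a rough Monte Carlo illustration of part (ii), a sketch under assumed inputs: \(X\) is a truncated Karhunen–Loève expansion and the spline order and knot count are arbitrary; none of these choices come from the paper. It forms the design matrix \(B\) with entries \(\langle X_i,B_s\rangle \) and tracks the extreme eigenvalues of \(\frac{1}{n}B^TB\) as \(n\) grows.

```python
# A rough Monte Carlo illustration of Lemma 1(ii); the process X, spline
# order and knot count are assumed for illustration only.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
k, n_knots = 3, 10
K_n = n_knots + k + 1
tknots = np.r_[np.zeros(k + 1), np.linspace(0, 1, n_knots + 2)[1:-1], np.ones(k + 1)]
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
Bbasis = np.sqrt(K_n) * np.column_stack(
    [BSpline(tknots, np.eye(K_n)[s], k)(x) for s in range(K_n)])

m = np.arange(1, 51)
# X(t) = sum_m xi_m sqrt(2) cos(m*pi*t)/m with xi_m iid N(0,1) (hypothetical)
phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(m, x)) / m[:, None]

for n in (200, 1000, 5000):
    X = rng.standard_normal((n, m.size)) @ phi     # n sample paths on the grid
    B = X @ Bbasis * dx                            # B[i, s] = <X_i, B_s>
    eigs = np.linalg.eigvalsh(B.T @ B / n)
    print(n, eigs.min(), eigs.max())    # extremes settle into a fixed [M3, M4]
```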

Lemma 2

Under conditions (C1)–(C5), as \(n\rightarrow \infty \), we have

$$\begin{aligned} \frac{\mathbf{Z}^T(I-A)\mathbf{Z}}{n}\mathop {\longrightarrow }\limits ^{P}\Sigma . \end{aligned}$$

Proof

Let \(\mu _j(X_i)=E(Z_{ij}|X_i)=\langle X_i,g_j\rangle , \eta _{ij}=Z_{ij}-\mu _j(X_i)\),

$$\begin{aligned} \tilde{V_j}=\Big (\mu _j(X_1),\ldots ,\mu _j(X_n)\Big )^T,\quad \tilde{\eta _j}=(\eta _{1j},\ldots ,\eta _{nj})^T,\quad j=1,\ldots ,p. \end{aligned}$$

We also define \(V=(\tilde{V_1},\ldots ,\tilde{V_p})\), \(\eta =(\tilde{\eta _1},\ldots ,\tilde{\eta _p})\). Then, \(\mathbf{Z}=\eta +V\) and

$$\begin{aligned} \frac{\mathbf{Z}^T(I-A)\mathbf{Z}}{n}= & {} \frac{(\eta +V)^T(I-A)(\eta +V)}{n}\\= & {} \frac{\eta ^T(I-A)\eta }{n}+\frac{\eta ^T(I-A)V}{n}+\frac{V^T(I-A)\eta }{n} +\frac{V^T(I-A)V}{n}\\= & {} I_1+I_2+I_3+I_4. \end{aligned}$$

For the \((j,l)\)th element of \(I_1\),

$$\begin{aligned} (I_1)_{jl}=\frac{\tilde{\eta _j}^T(I-A)\tilde{\eta _l}}{n}=\frac{\tilde{\eta _j}^T \tilde{\eta _l}}{n}-\frac{\tilde{\eta _j}^TA\tilde{\eta _l}}{n}, \quad j,l=1,\ldots ,p. \end{aligned}$$

By independence and the Cauchy–Schwarz inequality, we have

$$\begin{aligned} E\left\{ \frac{\sum _{i=1}^n[\eta _{ij}\eta _{il}-E(\eta _{ij} \eta _{il})]}{n}\right\} ^2= & {} \frac{E(\eta _{1j}\eta _{1l}-E\eta _{1j}\eta _{1l})^2}{n}\\\le & {} \frac{E\eta _{1j}^2\eta _{1l}^2}{n}\\\le & {} \frac{(E\eta _{1j}^4)^{1/2}(E\eta _{1l}^4)^{1/2}}{n}. \end{aligned}$$

Further, by the \(C_r\) inequality and (C2)–(C4), we have

$$\begin{aligned} E(\eta _{1j})^4= & {} E\Big (Z_{1j}-\langle X_1,g_j\rangle \Big )^4\le 8\Big (E|Z_{1j}|^4+E|\langle X_1,g_j\rangle |^4\Big )\\\le & {} 8\Big (E|Z_{1j}|^4+E\Vert X_1\Vert ^4\Vert g_j\Vert ^4\Big )<\infty , \quad j=1,\ldots ,p. \end{aligned}$$

Thus,

$$\begin{aligned} \frac{\eta ^T\eta }{n}\mathop {\longrightarrow }\limits ^{P}\Sigma . \end{aligned}$$
(8)

Since \(A\ge 0\), we have

$$\begin{aligned} \Big |\frac{\tilde{\eta _j}^TA\tilde{\eta _l}}{n}\Big |\le \Big | \frac{\tilde{\eta _j}^TA\tilde{\eta _j}}{n}\Big |^{1/2}\Big |\frac{\tilde{\eta _l}^TA \tilde{\eta _l}}{n}\Big |^{1/2}. \end{aligned}$$

By Lemma 1, except on an event whose probability tends to zero,

$$\begin{aligned} \frac{\tilde{\eta _j}^TA\tilde{\eta _j}}{n}=\frac{\tilde{\eta _j}^TB(B^TB)^{-1}B^T \tilde{\eta _j}}{n}\asymp \frac{\tilde{\eta _j}^TBB^T\tilde{\eta _j}}{n^2}. \end{aligned}$$

Also note that \(E\langle X_i,B_s\rangle \eta _{ij}=E\langle X_i,B_s\rangle E(\eta _{ij}|X_i)=0\). Then, by (7) and conditions (C2)–(C4), there exists a positive constant \(C\) such that

$$\begin{aligned} E\tilde{\eta _j}^TBB^T\tilde{\eta _j}= & {} E\left\{ \sum _{s=1}^{K_n}\left[ \sum _{i=1}^n\langle X_i,B_s\rangle \eta _{ij}\right] ^2\right\} \\= & {} n\sum _{s=1}^{K_n} E\langle X_1,B_s\rangle ^2\eta _{1j}^2\\\le & {} n\sum _{s=1}^{K_n} \Vert B_s\Vert ^2\big (E\Vert X_1\Vert ^4\big )^{1/2}\big (E|\eta _{1j}|^4\big )^{1/2}\\\le & {} C n K_n. \end{aligned}$$

Thus, for \(j,l=1,\ldots ,p\),

$$\begin{aligned} \frac{\tilde{\eta _j}^TA\tilde{\eta _l}}{n}=O_p\Big (\frac{K_n}{n}\Big )=o_p(1), \end{aligned}$$

which together with (8) yields

$$\begin{aligned} I_1\mathop {\longrightarrow }\limits ^{P}\Sigma . \end{aligned}$$
(9)

For the \((j,l)\)th element of \(I_4\), \(j,l=1,\ldots ,p\),

$$\begin{aligned} (I_4)_{jl}=\frac{\tilde{V_j}^T(I-A)\tilde{V_l}}{n}, \end{aligned}$$

by the Cauchy–Schwarz inequality,

$$\begin{aligned} E\Big |\tilde{V_j}^T(I-A)\tilde{V_l}\Big |\le \Big (E\tilde{V_j}^T(I-A)\tilde{V_j}\Big )^{1/2}\Big (E\tilde{V_l}^T(I-A) \tilde{V_l}\Big )^{1/2}. \end{aligned}$$

It follows from Theorem XII.1 of de Boor (2001) that there exist positive constants \(C_j\) and spline functions \(g_j^*\in S_{k,N_n}\), \(j=1,\ldots ,p\), such that

$$\begin{aligned} \Vert g_j-g_j^*\Vert _{\infty }\le C_j h^{k+1}. \end{aligned}$$

Set \(g_j^*=\sum _{s=1}^{K_n} b_{js}^*B_{s}\) and \(b_j^*=(b_{j1}^*,\ldots ,b_{jK_n}^*)^T\), \(j=1,\ldots ,p\); then

$$\begin{aligned} \tilde{V_j}^*=\Big (\langle X_1,g_j^*\rangle ,\ldots ,\langle X_n,g_j^*\rangle \Big )^T=Bb_j^*. \end{aligned}$$

As A is an orthogonal projection matrix,

$$\begin{aligned} E\Big |\tilde{V_j}^T(I-A)\tilde{V_j}\Big |= & {} E\Big |(I-A)\tilde{V_j}\Big |^2\le E\Big |\tilde{V_j}-\tilde{V_j^*}\Big |^2\\\le & {} n E\Vert X_1\Vert ^2\Vert g_j-g_j^*\Vert ^2\lesssim nh^{2(k+1)}. \end{aligned}$$

From the above results and (C1), we have

$$\begin{aligned} \frac{\tilde{V_j}^T(I-A)\tilde{V_l}}{n}=O_p\big (h^{2(k+1)}\big )=o_p(1), \end{aligned}$$

that is,

$$\begin{aligned} I_4\mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$
(10)

For the \((j,l)\)th elements of \(I_2\) and \(I_3\), \(j,l=1,\ldots ,p\), we have

$$\begin{aligned} \frac{|\tilde{\eta _j}^T(I-A)\tilde{V_l}|}{n}\le & {} \frac{(\tilde{\eta _j}^T(I-A)\tilde{\eta _j})^{1/2}(\tilde{V_l}^T(I-A) \tilde{V_l})^{1/2}}{n},\\ \frac{|\tilde{V_j}^T(I-A)\tilde{\eta _l}|}{n}\le & {} \frac{(\tilde{V_j}^T(I-A)\tilde{V_j})^{1/2}(\tilde{\eta _l}^T(I-A) \tilde{\eta _l})^{1/2}}{n}. \end{aligned}$$

Using (9) and (10), we can infer that

$$\begin{aligned} I_2\mathop {\longrightarrow }\limits ^{P}0, \quad I_3\mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$
(11)

Combining (9)–(11) completes the proof of Lemma 2. \(\square \)
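
The conclusion of Lemma 2 can also be checked by simulation. The sketch below assumes a hypothetical design with \(p=2\), \(Z_{ij}=\langle X_i,g_j\rangle +\eta _{ij}\) and \(\eta _i\sim N(0,\Sigma )\); all concrete choices (\(\Sigma \), \(g_1\), \(g_2\), the process \(X\)) are ours. It computes \(\mathbf{Z}^T(I-A)\mathbf{Z}/n\) without ever forming the \(n\times n\) projection \(A=B(B^TB)^{-1}B^T\), applying \(I-A\) through a least-squares fit instead.

```python
# A minimal sketch of Lemma 2 for p = 2; Sigma, g_1, g_2 and the process X
# are assumed for illustration.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)
k, n_knots = 3, 10
K_n = n_knots + k + 1
tknots = np.r_[np.zeros(k + 1), np.linspace(0, 1, n_knots + 2)[1:-1], np.ones(k + 1)]
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
Bbasis = np.sqrt(K_n) * np.column_stack(
    [BSpline(tknots, np.eye(K_n)[s], k)(x) for s in range(K_n)])

Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
g = np.vstack([np.sin(np.pi * x), x * (1 - x)])    # g_1, g_2 on the grid

n = 5000
m = np.arange(1, 51)
phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(m, x)) / m[:, None]
X = rng.standard_normal((n, m.size)) @ phi
eta = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
Z = X @ g.T * dx + eta                             # Z[i, j] = <X_i, g_j> + eta_ij

B = X @ Bbasis * dx
resid = Z - B @ np.linalg.lstsq(B, Z, rcond=None)[0]   # (I - A) Z
print(Z.T @ resid / n)                                 # should approach Sigma
```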

Proof of Theorem 1

Denote \(\Phi =\Big (\langle X_1,\alpha \rangle ,\ldots ,\langle X_n,\alpha \rangle \Big )^T\) and \(\varepsilon =(\varepsilon _1,\ldots ,\varepsilon _n)^T\). Then \(Y=\mathbf{Z}\beta +\Phi +\varepsilon \), and we can write

$$\begin{aligned} \sqrt{n}(\widehat{\beta }-\beta )= & {} \sqrt{n}\Big [\mathbf{Z}^T(I-A)\mathbf{Z}\Big ]^{-1}{} \mathbf{Z}^T(I-A)\Phi +\sqrt{n}\Big [\mathbf{Z}^T(I-A)\mathbf{Z}\Big ]^{-1}{} \mathbf{Z}^T(I-A)\varepsilon \\= & {} \Delta _1+\Delta _2. \end{aligned}$$

Observe that

$$\begin{aligned} \Delta _1= & {} \Big [\frac{\mathbf{Z}^T(I-A)\mathbf{Z}}{n}\Big ]^{-1}n^{-1/2}\mathbf{Z}^T(I-A)\Phi =\Big [\frac{\mathbf{Z}^T(I-A)\mathbf{Z}}{n}\Big ]^{-1}\Delta _{11},\\ \Delta _2= & {} \Big [\frac{\mathbf{Z}^T(I-A)\mathbf{Z}}{n}\Big ]^{-1}n^{-1/2}{} \mathbf{Z}^T(I-A)\varepsilon =\Big [\frac{\mathbf{Z}^T(I-A)\mathbf{Z}}{n}\Big ]^{-1}\Delta _{21}. \end{aligned}$$

For \(\Delta _{11}\), as \(\mathbf{Z}=\eta +V\),

$$\begin{aligned} \Delta _{11}=n^{-1/2}\eta ^T(I-A)\Phi +n^{-1/2}V^T(I-A)\Phi . \end{aligned}$$
(12)

By (C4) and Theorem XII.1 of de Boor (2001), there exist a spline function \(\alpha ^*=\sum _{s=1}^{K_n} b_s^*B_s\in S_{k,N_n}\) and a positive constant \(C\) such that

$$\begin{aligned} \Vert \alpha -\alpha ^*\Vert _{\infty }\le Ch^{k+1}. \end{aligned}$$
(13)

Set \(\Phi ^*=(\langle X_1,\alpha ^*\rangle ,\ldots ,\langle X_n,\alpha ^*\rangle )^T\) and \(b^*=(b_1^*,\ldots ,b_{K_n}^*)^T\); then \(\Phi ^*=Bb^*\). For \(j=1,\ldots ,p\), by conditions (C1), (C2) and (C4) and Theorem XII.1 of de Boor (2001), we can infer

$$\begin{aligned}&E\Big |\tilde{V_j}^T(I-A)\Phi \Big |=E\Big |(\tilde{V_j}-\tilde{V_j^*})^T(I-A) (\Phi -\Phi ^*)\Big |\\&\quad \le E\Big \{\Big |(\tilde{V_j}-\tilde{V_j^*})^T(I-A)(\tilde{V_j}-\tilde{V_j^*}) \Big |^{1/2}\Big | (\Phi -\Phi ^*)^T(I-A)(\Phi -\Phi ^*)\Big |^{1/2}\Big \}\\&\quad \le \left( E\sum _{i=1}^n\langle X_i,g_j-g_j^*\rangle ^2\right) ^{1/2} \left( E\sum _{i=1}^n\langle X_i,\alpha -\alpha ^*\rangle ^2\right) ^{1/2}\\&\quad \lesssim nh^{2(k+1)}. \end{aligned}$$

Thus, by (C1) we have

$$\begin{aligned} n^{-1/2}V^T(I-A)\Phi =O_p(n^{1/2}h^{2(k+1)})=o_p(1). \end{aligned}$$
(14)

Observe that for \(j=1,\ldots ,p\),

$$\begin{aligned} \Big |n^{-1/2}\tilde{\eta _j}^T(I-A)\Phi \Big |= & {} \Big |n^{-1/2}\tilde{\eta _j}^T(I-A)(\Phi -\Phi ^*)\Big |\\\le & {} \Big |n^{-1/2}\tilde{\eta _j}^T(\Phi -\Phi ^*)\Big |+\Big |n^{-1/2} \tilde{\eta _j}^TA(\Phi -\Phi ^*)\Big |\\\le & {} n^{-1/2}\Big |\sum _{i=1}^n{\eta _{ij}}\langle X_i,\alpha -\alpha ^*\rangle \Big |\\&+\,n^{-1/2}\Big |\tilde{\eta _j}^TA\tilde{\eta _j}\Big |^{1/2} \Big |\sum _{i=1}^n\langle X_i,\alpha -\alpha ^*\rangle ^2\Big |^{1/2}\\\triangleq & {} I_{j1}+I_{j2}. \end{aligned}$$

As \(E\Big (\eta _{ij}\langle X_i,\alpha -\alpha ^*\rangle \Big )=E\Big [\langle X_i,\alpha -\alpha ^*\rangle E(\eta _{ij}|X_i)\Big ]=0\) and

$$\begin{aligned} E\Big |\sum _{i=1}^n \eta _{ij}\langle X_i,\alpha -\alpha ^*\rangle \Big |^2= & {} nE\eta _{1j}^2\langle X_1,\alpha -\alpha ^*\rangle ^2\\\le & {} n \Vert \alpha -\alpha ^*\Vert ^2(E\eta _{1j}^4)^{1/2}(E\Vert X_1\Vert ^4)^{1/2}\\\lesssim & {} nh^{2(k+1)}, \end{aligned}$$

we can infer

$$\begin{aligned} I_{j1}=O_p(h^{k+1})=o_p(1). \end{aligned}$$
(15)

Further, by Lemma 1, (C1) and (13), we can show

$$\begin{aligned} I_{j2}=O_p(K_n^{1/2}h^{k+1})=O_p(n^{-r(k+\frac{1}{2})})=o_p(1). \end{aligned}$$
(16)

By (12), (14)–(16) and Lemma 2, we have

$$\begin{aligned} \Delta _1\mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$
(17)

\(\Delta _{21}\) can be expressed as

$$\begin{aligned} \Delta _{21}=n^{-1/2}\eta ^T(I-A)\varepsilon +n^{-1/2}V^T(I-A)\varepsilon \triangleq R_1+R_2. \end{aligned}$$

Let \(\epsilon _i=\eta _i \varepsilon _i\). Since \(\varepsilon _i\) is independent of \((X_i,\mathbf{Z}_i)\) and \((X_i,\mathbf{Z}_i,Y_i)\) is an i.i.d. sequence, the \(\epsilon _i\) are i.i.d. random vectors with \(E\epsilon _i=0\) and \(Var(\epsilon _i)=\sigma ^2\Sigma \).

Observe that

$$\begin{aligned} R_1= & {} n^{-1/2}\eta ^T\varepsilon -n^{-1/2}\eta ^TA\varepsilon =n^{-1/2}\sum _{i=1}^n \eta _i \varepsilon _i-n^{-1/2}\eta ^TA\varepsilon \\= & {} n^{-1/2}\sum _{i=1}^n \epsilon _i-n^{-1/2}\eta ^TA\varepsilon . \end{aligned}$$

Then, by the central limit theorem,

$$\begin{aligned} n^{-1/2}\sum _{i=1}^n \epsilon _i\mathop {\longrightarrow }\limits ^{D}N\big (0,\sigma ^2\Sigma \big ). \end{aligned}$$
(18)

Also note that

$$\begin{aligned} \Big |n^{-1/2}\tilde{\eta _j}^TA\varepsilon \Big |\le n^{-1/2}\Big (\tilde{\eta _j}^TA\tilde{\eta _j}\Big )^{1/2} \Big (\varepsilon ^TA\varepsilon \Big )^{1/2}. \end{aligned}$$

Then, it follows from Lemma 1 that

$$\begin{aligned} \varepsilon ^TA\varepsilon =\varepsilon ^TB(B^TB)^{-1}B^T\varepsilon \asymp \frac{\varepsilon ^TBB^T\varepsilon }{n}. \end{aligned}$$

Since \(E\langle X_i,B_s\rangle \varepsilon _i\langle X_j,B_s\rangle \varepsilon _j=0, i\ne j\), we have

$$\begin{aligned} E\varepsilon ^TBB^T\varepsilon= & {} E\sum _{s=1}^{K_n}\left( \sum _{i=1}^n \varepsilon _i\langle X_i,B_s\rangle \right) ^2=\sum _{s=1}^{K_n}\sum _{i=1}^nE\varepsilon _i^2\langle X_i,B_s\rangle ^2\\\le & {} n\sigma ^2\sum _{s=1}^{K_n} E\Vert X_1\Vert ^2\Vert B_s\Vert ^2\lesssim n K_n, \end{aligned}$$

that is, \(\varepsilon ^TA\varepsilon =O_p(K_n)\). In addition, we know from the proof of Lemma 2 that

$$\begin{aligned} \frac{\tilde{\eta _j}^TA\tilde{\eta _j}}{n}=O_p\Big (\frac{K_n}{n}\Big ). \end{aligned}$$

Thus,

$$\begin{aligned} n^{-1/2}\eta ^TA\varepsilon =O_p(K_n n^{-1/2})=o_p(1), \end{aligned}$$

which, together with (18), yields

$$\begin{aligned} R_1\mathop {\longrightarrow }\limits ^{D}N\big (0,\sigma ^2\Sigma \big ). \end{aligned}$$
(19)

For the jth element of \(R_2\), \(j=1,\ldots ,p\), we have

$$\begin{aligned} \Big |n^{-1/2}\tilde{V_j}^T(I-A)\varepsilon \Big |= & {} \Big |n^{-1/2} (\tilde{V_j}-\tilde{V_j^*})^T(I-A)\varepsilon \Big |\\\le & {} \Big |n^{-1/2}(\tilde{V_j}-\tilde{V_j^*})^T\varepsilon \Big | +\Big |n^{-1/2}(\tilde{V_j}-\tilde{V_j^*})^TA\varepsilon \Big |\\\le & {} \Big |n^{-1/2}(\tilde{V_j}-\tilde{V_j^*})^T\varepsilon \Big | +n^{-1/2}\Big |(\tilde{V_j}-\tilde{V_j^*})^T(\tilde{V_j}-\tilde{V_j^*}) \Big |^{1/2}\Big |\varepsilon ^TA\varepsilon \Big |^{1/2}\\\triangleq & {} J_{j1}+J_{j2}. \end{aligned}$$

Since \(\varepsilon _i\) is independent of \((X_i,\mathbf{Z}_i)\), we have

$$\begin{aligned} E J_{j1}^2=n^{-1}\sum _{i=1}^n E\varepsilon _i^2\langle X_i,g_j-g_j^*\rangle ^2\le \sigma ^2E\Vert X_1\Vert ^2\Vert g_j-g_j^*\Vert ^2\lesssim h^{2(k+1)}. \end{aligned}$$

Then,

$$\begin{aligned} J_{j1}=O_p\big (h^{k+1}\big )=o_p(1). \end{aligned}$$

Also, observe that

$$\begin{aligned} E\sum _{i=1}^n\langle X_i,g_j-g_j^*\rangle ^2=nE\langle X_1,g_j-g_j^*\rangle ^2\le n E\Vert X_1\Vert ^2\Vert g_j-g_j^*\Vert ^2\lesssim nh^{2k+2}. \end{aligned}$$

Then, by (C1), we have

$$\begin{aligned} J_{j2}=O_p\big (K_n^{1/2}h^{k+1}\big )=O_p\big (n^{-r(k+1/2)}\big )=o_p(1). \end{aligned}$$

From the above results, we can infer

$$\begin{aligned} R_2\mathop {\longrightarrow }\limits ^{P}0. \end{aligned}$$
(20)

Now, by Lemma 2, (17), (19), (20) and Slutsky's theorem, we obtain Theorem 1. \(\square \)
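
Since \(Y=\mathbf{Z}\beta +\Phi +\varepsilon \), the decomposition at the start of the proof is equivalent to the profile least-squares form \(\widehat{\beta }=[\mathbf{Z}^T(I-A)\mathbf{Z}]^{-1}\mathbf{Z}^T(I-A)Y\). A minimal end-to-end sketch of this computation follows; the true \(\beta \), the slope function \(\alpha \) and the data-generating design are assumptions made purely for illustration.

```python
# A minimal sketch of the profile estimator of beta implied by the proof of
# Theorem 1; beta, alpha and the design below are hypothetical.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(2)
k, n_knots = 3, 6
K_n = n_knots + k + 1
tknots = np.r_[np.zeros(k + 1), np.linspace(0, 1, n_knots + 2)[1:-1], np.ones(k + 1)]
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
Bbasis = np.sqrt(K_n) * np.column_stack(
    [BSpline(tknots, np.eye(K_n)[s], k)(x) for s in range(K_n)])

beta = np.array([1.0, -0.5])
alpha = np.sin(2 * np.pi * x)                      # true slope function (assumed)
g = np.vstack([np.sin(np.pi * x), x * (1 - x)])
m = np.arange(1, 51)
phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(m, x)) / m[:, None]

n = 2000
X = rng.standard_normal((n, m.size)) @ phi
Z = X @ g.T * dx + rng.standard_normal((n, 2))
Y = Z @ beta + X @ alpha * dx + 0.5 * rng.standard_normal(n)

B = X @ Bbasis * dx
PZ = Z - B @ np.linalg.lstsq(B, Z, rcond=None)[0]  # (I - A) Z
PY = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]  # (I - A) Y
beta_hat = np.linalg.lstsq(PZ, PY, rcond=None)[0]  # [Z'(I-A)Z]^{-1} Z'(I-A)Y
print(beta_hat)                                    # should be close to (1, -0.5)
```

Because \(I-A\) is idempotent, regressing the partialled-out \(Y\) on the partialled-out \(\mathbf{Z}\) reproduces \([\mathbf{Z}^T(I-A)\mathbf{Z}]^{-1}\mathbf{Z}^T(I-A)Y\) without forming the \(n\times n\) matrix \(A\).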

Proof of Theorem 2

Observe that

$$\begin{aligned} \widehat{b}=(B^TB)^{-1}B^T(Y-\mathbf{Z}\widehat{\beta })=(B^TB)^{-1}B^T\big (\mathbf{Z}(\beta -\widehat{\beta })+\Phi +\varepsilon \big ). \end{aligned}$$

Let \(\tilde{Y}=\mathbf{Z}(\beta -\widehat{\beta })+\Phi \). Denote \(\tilde{b}=(B^TB)^{-1}B^T\tilde{Y}\) and \(\tilde{\alpha }(t)=\sum _{s=1}^{K_n} \tilde{b_s}B_s(t)\), where \(\tilde{b}=(\tilde{b_1},\ldots ,\tilde{b_{K_n}})^T\). Then, \(\widehat{b}-\tilde{b}=(B^TB)^{-1}B^T\varepsilon \). By Lemma 1, we have

$$\begin{aligned} \Vert \widehat{b}-\tilde{b}\Vert _2^2=\varepsilon ^T B(B^TB)^{-1}(B^TB)^{-1}B^T\varepsilon \asymp \frac{\varepsilon ^T BB^T\varepsilon }{n^2}, \end{aligned}$$

except on an event whose probability tends to zero as \(n\rightarrow \infty \). Thus, by (7), we can infer

$$\begin{aligned} \Vert \widehat{\alpha }-\tilde{\alpha }\Vert ^2\asymp \Vert \widehat{b}-\tilde{b}\Vert _2^2=O_p \Big (\frac{K_n}{n}\Big ). \end{aligned}$$
(21)

Also, it follows from Theorem XII.1 of de Boor (2001) that there exist a spline function \(\alpha ^*(t)=\sum _{s=1}^{K_n} b_s^*B_s(t)\in S_{k,N_n}\), where \(b^*=(b_1^*,\ldots ,b_{K_n}^*)^T\), and a constant \(C>0\) such that

$$\begin{aligned} \Vert \alpha ^*-\alpha \Vert \le \Vert \alpha ^*-\alpha \Vert _{\infty }\le Ch^{k+1}. \end{aligned}$$
(22)

By Lemma 1 and (7), we have

$$\begin{aligned} \Big \Vert \tilde{\alpha }-\alpha ^*\Big \Vert ^2\asymp \Big \Vert \tilde{b}-b^*\Big \Vert _2^2\asymp \frac{(\tilde{b}-b^*)^TB^TB(\tilde{b}-b^*)}{n}=\frac{\Vert B\tilde{b}-Bb^*\Vert _2^2}{n}. \end{aligned}$$

Observe that \(B\tilde{b}=B(B^TB)^{-1}B^T\tilde{Y}\) and \(B(B^TB)^{-1}B^T\) is an orthogonal projection matrix. Thus,

$$\begin{aligned} \frac{\Vert B\tilde{b}-Bb^*\Vert _2^2}{n}\le & {} \frac{\Vert \tilde{Y}-Bb^*\Vert _2^2}{n}=\frac{\big \Vert \mathbf{Z}(\beta -\widehat{\beta })+(\Phi -\Phi ^*)\big \Vert _2^2}{n}\\\le & {} \frac{2}{n}\Big [\Vert \mathbf{Z}(\beta -\widehat{\beta })\Vert _2^2+\Vert \Phi -\Phi ^*\Vert _2^2\Big ]\\= & {} \frac{2}{n} \sum _{i=1}^n\left\{ \sum _{j=1}^p Z_{ij}(\beta _j-\widehat{\beta }_j)\right\} ^2+\frac{2}{n} \sum _{i=1}^n \langle X_i,\alpha -\alpha ^*\rangle ^2. \end{aligned}$$

Applying (C2), (22) and the Cauchy–Schwarz inequality, we obtain that

$$\begin{aligned} E\langle X_1,\alpha -\alpha ^*\rangle ^2\le E\Vert X_1\Vert ^2\Vert \alpha -\alpha ^*\Vert ^2\le Ch^{2(k+1)}, \end{aligned}$$

that is,

$$\begin{aligned} \frac{\sum _{i=1}^n \langle X_i,\alpha -\alpha ^*\rangle ^2}{n}=O_p\big (h^{2(k+1)}\big ). \end{aligned}$$
(23)

In addition, note that

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^n\left\{ \sum _{j=1}^p Z_{ij}(\beta _j-\widehat{\beta }_j)\right\} ^2\le p\sum _{j=1}^p\big (\beta _j-\widehat{\beta }_j\big )^2\frac{\sum _{i=1}^n Z_{ij}^2}{n}. \end{aligned}$$

Then, it follows from Theorem 1 and (C4) that

$$\begin{aligned} \frac{1}{n} \sum _{i=1}^n\left\{ \sum _{j=1}^p Z_{ij}(\beta _j-\widehat{\beta }_j)\right\} ^2=O_p(n^{-1}), \end{aligned}$$
(24)

which together with (23) yields

$$\begin{aligned} \Vert \tilde{\alpha }-\alpha ^*\Vert ^2=O_p\big (n^{-1}+h^{2(k+1)}\big ). \end{aligned}$$
(25)

Further, we can infer that

$$\begin{aligned} \Vert \widehat{\alpha }-\alpha \Vert ^2\le 3\Big (\Vert \widehat{\alpha }-\tilde{\alpha }\Vert ^2+\Vert \tilde{\alpha }-\alpha ^*\Vert ^2 +\Vert \alpha ^*-\alpha \Vert ^2\Big ). \end{aligned}$$
(26)

Then, combining (21), (22), (25) and (26) completes the proof of Theorem 2. \(\square \)
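
To watch the rate in Theorem 2 at work, the sketch after Theorem 1 can be extended: recover \(\widehat{b}=(B^TB)^{-1}B^T(Y-\mathbf{Z}\widehat{\beta })\), form \(\widehat{\alpha }=\sum _{s=1}^{K_n}\widehat{b}_sB_s\), and track \(\Vert \widehat{\alpha }-\alpha \Vert \) as \(n\) grows. Everything below is again an illustration under an assumed design; the error should decrease with \(n\) until it reaches the \(O(h^{k+1})\) approximation floor.

```python
# A minimal sketch of the L2 convergence of alpha_hat in Theorem 2, under the
# same hypothetical design as the sketch after Theorem 1.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(3)
k, n_knots = 3, 6
K_n = n_knots + k + 1
tknots = np.r_[np.zeros(k + 1), np.linspace(0, 1, n_knots + 2)[1:-1], np.ones(k + 1)]
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
Bbasis = np.sqrt(K_n) * np.column_stack(
    [BSpline(tknots, np.eye(K_n)[s], k)(x) for s in range(K_n)])

beta = np.array([1.0, -0.5])
alpha = np.sin(2 * np.pi * x)
g = np.vstack([np.sin(np.pi * x), x * (1 - x)])
m = np.arange(1, 51)
phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(m, x)) / m[:, None]

for n in (200, 800, 3200):
    X = rng.standard_normal((n, m.size)) @ phi
    Z = X @ g.T * dx + rng.standard_normal((n, 2))
    Y = Z @ beta + X @ alpha * dx + 0.5 * rng.standard_normal(n)
    B = X @ Bbasis * dx
    PZ = Z - B @ np.linalg.lstsq(B, Z, rcond=None)[0]
    PY = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]
    beta_hat = np.linalg.lstsq(PZ, PY, rcond=None)[0]
    b_hat = np.linalg.lstsq(B, Y - Z @ beta_hat, rcond=None)[0]
    alpha_hat = Bbasis @ b_hat
    l2 = np.sqrt(np.sum((alpha_hat - alpha) ** 2) * dx)
    print(n, round(l2, 4))                         # L2 error roughly decreasing
```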

Proof of Theorem 3

We can write

$$\begin{aligned} \sqrt{n}(\widehat{\sigma }_n^2-\sigma ^2)= & {} \sqrt{n}\left\{ \frac{\sum _{i=1}^n\big (Y_i-\langle X_i,\widehat{\alpha }\rangle -\mathbf{Z}_i^T\widehat{\beta }\big )^2}{n}-\sigma ^2\right\} \\= & {} \sqrt{n}\left\{ \frac{\sum _{i=1}^n\big [\langle X_i,\alpha -\widehat{\alpha }\rangle +\mathbf{Z}_i^T(\beta -\widehat{\beta })+\varepsilon _i\big ]^2}{n}-\sigma ^2\right\} \\= & {} n^{-1/2}\sum _{i=1}^n\langle X_i,\alpha -\widehat{\alpha }\rangle ^2+n^{-1/2}\sum _{i=1}^n (\beta -\widehat{\beta })^T\mathbf{Z}_i\mathbf{Z}_i^T(\beta -\widehat{\beta })\\&+\,n^{-1/2}\sum _{i=1}^n(\varepsilon _i^2-\sigma ^2)+2n^{-1/2}\sum _{i=1}^n\langle X_i,\alpha -\widehat{\alpha }\rangle \varepsilon _i\\&+\,2n^{-1/2}\sum _{i=1}^n \varepsilon _i\mathbf{Z}_i^T(\beta -\widehat{\beta })+2n^{-1/2}\sum _{i=1}^n \langle X_i,\alpha -\widehat{\alpha }\rangle \mathbf{Z}_i^T(\beta -\widehat{\beta })\\\triangleq & {} R_{n1}+R_{n2}+R_{n3}+R_{n4}+R_{n5}+R_{n6}. \end{aligned}$$

Observe that

$$\begin{aligned} R_{n1}=n^{-1/2}\sum _{i=1}^n \langle X_i,\alpha -\widehat{\alpha }\rangle ^2\le \Vert \alpha -\widehat{\alpha }\Vert ^2 n^{-1/2}\sum _{i=1}^n \Vert X_i\Vert ^2. \end{aligned}$$

Then, by (C1), (C2) and Theorem 2, we have

$$\begin{aligned} R_{n1}=O_p\big (K_nn^{-1/2}+n^{1/2}h^{2(k+1)}\big )=O_p \big (n^{r-1/2}+n^{1/2-2r(k+1)}\big )=o_p(1).\nonumber \\ \end{aligned}$$
(27)

It follows from (24) that

$$\begin{aligned} R_{n2}=O_p\big (n^{-1/2}\big )=o_p(1). \end{aligned}$$
(28)

For \(R_{n3}\), since \(E(\varepsilon _1^2-\sigma ^2)=0\) and \(\Lambda ^2=E(\varepsilon _1^2-\sigma ^2)^2<\infty \), it follows from the central limit theorem that

$$\begin{aligned} R_{n3}\mathop {\longrightarrow }\limits ^{D} N(0,\Lambda ^2). \end{aligned}$$
(29)

For \(R_{n4}\), we have

$$\begin{aligned} |R_{n4}|=2n^{1/2}\Big |\langle \frac{\sum _{i=1}^n X_i\varepsilon _i}{n},\alpha -\widehat{\alpha }\rangle \Big |\le 2n^{1/2}\Big \Vert \frac{\sum _{i=1}^n X_i\varepsilon _i}{n}\Big \Vert \Big \Vert \alpha -\widehat{\alpha }\Big \Vert . \end{aligned}$$

Then, applying (C1), (C2) and Theorem 2, we can obtain

$$\begin{aligned} R_{n4}=O_p\big (K_n^{1/2}n^{-1/2}+h^{k+1}\big )=o_p(1). \end{aligned}$$
(30)

Note that

$$\begin{aligned} R_{n5}=2n^{1/2}\frac{\sum _{i=1}^n\varepsilon _i \mathbf{Z}_i^T(\beta -\widehat{\beta })}{n}=2n^{1/2} \sum _{j=1}^p(\beta _j-\widehat{\beta }_j)\frac{\sum _{i=1}^n\varepsilon _i Z_{ij}}{n}. \end{aligned}$$

Thus, using (C3) and Theorem 1, we have

$$\begin{aligned} R_{n5}=O_p\big (n^{-1/2}\big )=o_p(1). \end{aligned}$$
(31)

Also, observe that

$$\begin{aligned} |R_{n6}|= & {} 2n^{1/2}\frac{\big |\sum _{i=1}^n \langle X_i,\alpha -\widehat{\alpha }\rangle \mathbf{Z}_i^T(\beta -\widehat{\beta })\big |}{n}\\\le & {} 2n^{1/2}\sum _{j=1}^p|\beta _j-\widehat{\beta }_j| \Vert \alpha -\widehat{\alpha }\Vert \frac{\sum _{i=1}^n\Vert X_i\Vert |Z_{ij}|}{n}. \end{aligned}$$

Then, by (C1)–(C3) and Theorems 1 and 2, we obtain

$$\begin{aligned} R_{n6}=O_p\big (K_n^{1/2}n^{-1/2}+h^{k+1}\big )=o_p(1). \end{aligned}$$
(32)

Finally, combining (27)–(32) completes the proof of Theorem 3. \(\square \)

Proof of Corollary 1

It follows from Theorem 1 that

$$\begin{aligned} \frac{\sqrt{n}}{\sigma }\Sigma ^{\frac{1}{2}}(\widehat{\beta }-\beta )\mathop {\longrightarrow }\limits ^{D} N(0,I_p). \end{aligned}$$

Also, by Lemma 2 and Theorem 3, we have that

$$\begin{aligned} \widehat{\Sigma }_n\mathop {\longrightarrow }\limits ^{P} \Sigma ,\quad \quad \widehat{\sigma }^2_n\mathop {\longrightarrow }\limits ^{P}\sigma ^2. \end{aligned}$$

Then, by Slutsky's theorem, we obtain Corollary 1. \(\square \)
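
In practice, Corollary 1 is what licenses Wald-type inference for \(\beta \): plug \(\widehat{\Sigma }_n=\mathbf{Z}^T(I-A)\mathbf{Z}/n\) and \(\widehat{\sigma }_n^2\) into the normal limit to obtain standard errors. The sketch below continues the hypothetical design of the earlier sketches and is an illustration only; the limiting covariance of \(\sqrt{n}(\widehat{\beta }-\beta )\) is \(\sigma ^2\Sigma ^{-1}\), which the code estimates by \(\widehat{\sigma }_n^2\widehat{\Sigma }_n^{-1}\).

```python
# A minimal sketch of the Wald-type confidence intervals licensed by
# Corollary 1, under the same hypothetical design as the earlier sketches.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(4)
k, n_knots = 3, 6
K_n = n_knots + k + 1
tknots = np.r_[np.zeros(k + 1), np.linspace(0, 1, n_knots + 2)[1:-1], np.ones(k + 1)]
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]
Bbasis = np.sqrt(K_n) * np.column_stack(
    [BSpline(tknots, np.eye(K_n)[s], k)(x) for s in range(K_n)])

beta = np.array([1.0, -0.5])
alpha = np.sin(2 * np.pi * x)
g = np.vstack([np.sin(np.pi * x), x * (1 - x)])
m = np.arange(1, 51)
phi = np.sqrt(2.0) * np.cos(np.pi * np.outer(m, x)) / m[:, None]

n = 2000
X = rng.standard_normal((n, m.size)) @ phi
Z = X @ g.T * dx + rng.standard_normal((n, 2))
Y = Z @ beta + X @ alpha * dx + 0.5 * rng.standard_normal(n)

B = X @ Bbasis * dx
PZ = Z - B @ np.linalg.lstsq(B, Z, rcond=None)[0]
PY = Y - B @ np.linalg.lstsq(B, Y, rcond=None)[0]
beta_hat = np.linalg.lstsq(PZ, PY, rcond=None)[0]
b_hat = np.linalg.lstsq(B, Y - Z @ beta_hat, rcond=None)[0]

Sigma_hat = PZ.T @ PZ / n                          # Z'(I-A)Z / n
resid = Y - Z @ beta_hat - X @ (Bbasis @ b_hat) * dx
sigma2_hat = np.mean(resid ** 2)                   # the estimator sigma_hat_n^2
se = np.sqrt(sigma2_hat * np.diag(np.linalg.inv(Sigma_hat)) / n)
for j in range(2):                                 # 95% Wald intervals
    print(beta_hat[j] - 1.96 * se[j], beta_hat[j] + 1.96 * se[j])
```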


Cite this article

Zhou, J., Chen, Z. & Peng, Q. Polynomial spline estimation for partial functional linear regression models. Comput Stat 31, 1107–1129 (2016). https://doi.org/10.1007/s00180-015-0636-0
