Abstract
We propose a new procedure for estimating the unknown slope parameters and the slope function in partial functional linear regression. The asymptotic distribution of the estimator of the vector of slope parameters is derived, and the global convergence rate of the estimator of the unknown slope function is established under a suitable norm. The convergence rate of the mean squared prediction error for the proposed estimators is also established. Based on the proposed estimation procedure, we further construct penalized regression estimators and establish their variable selection consistency and oracle properties. Finite-sample properties of our procedures are studied through Monte Carlo simulations. A real data example concerning real estate data is used to illustrate the proposed methodology.
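To fix ideas, the following is a minimal numerical sketch of the kind of two-step procedure studied here: functional principal components are estimated from the empirical covariance of the functional covariate, the functional term is truncated at \(m\) components, and the slope vector and score coefficients are obtained by least squares. All design choices below (grid size, truncation level, data-generating process) are illustrative assumptions, not the authors' implementation.

```python
# A minimal, illustrative sketch (not the authors' code) of FPCA-based
# estimation in the partial functional linear model
#   Y_i = Z_i^T beta_0 + \int X_i(t) gamma(t) dt + eps_i.
import numpy as np

rng = np.random.default_rng(0)
n, d, m, G = 500, 2, 4, 101              # sample size, dim(Z), truncation, grid size
t = np.linspace(0.0, 1.0, G)
dt = t[1] - t[0]

# Hypothetical data-generating design via a Karhunen-Loeve expansion.
J = 20
phi = np.array([np.sqrt(2.0) * np.sin(j * np.pi * t) for j in range(1, J + 1)])
lam = 1.0 / np.arange(1, J + 1) ** 2     # eigenvalues lambda_j = j^{-2}
gamma_true = (1.0 / np.arange(1, J + 1) ** 2) @ phi
beta0 = np.array([1.0, -0.5])
xi = rng.normal(size=(n, J)) * np.sqrt(lam)        # FPC scores
X = xi @ phi                                       # X_i(t) on the grid
Z = rng.normal(size=(n, d))
Y = Z @ beta0 + (X @ gamma_true) * dt + rng.normal(0.0, 0.5, n)

# Step 1: FPCA from the empirical covariance operator.
Xc = X - X.mean(axis=0)
K_hat = Xc.T @ Xc / n * dt               # discretized covariance kernel
evals, evecs = np.linalg.eigh(K_hat)
order = np.argsort(evals)[::-1][:m]
phi_hat = evecs[:, order].T / np.sqrt(dt)          # L2-normalized eigenfunctions
scores = Xc @ phi_hat.T * dt                       # estimated scores xi_hat_{ij}

# Step 2: least squares of Y on (Z, truncated scores).
design = np.column_stack([Z - Z.mean(axis=0), scores])
coef, *_ = np.linalg.lstsq(design, Y - Y.mean(), rcond=None)
beta_hat, g_hat = coef[:d], coef[d:]
gamma_hat = g_hat @ phi_hat                        # gamma_hat(t) on the grid
print("beta_hat ~", beta_hat)                      # close to (1.0, -0.5)
```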
References
Aneiros, G., Ferraty, F., Vieu, P.: Variable selection in partial linear regression with functional covariate. Statistics 49, 1322–1347 (2015)
Brunel, E., Roche, A.: Penalized contrast estimation in functional linear models with circular data. Statistics 49, 1298–1321 (2015)
Cai, T.T., Hall, P.: Prediction in functional linear regression. Ann. Stat. 34, 2159–2179 (2006)
Cardot, H., Ferraty, F., Sarda, P.: Spline estimators for the functional linear model. Stat. Sin. 13, 571–591 (2003)
Cardot, H., Mas, A., Sarda, P.: CLT in functional linear models. Probab. Theory Relat. Fields 138, 325–361 (2007)
Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 96, 1348–1360 (2001)
Fan, J., Lv, J.: Nonconcave penalized likelihood with NP-dimensionality. IEEE Trans. Inf. Theory 57, 5467–5484 (2011)
Fan, J., Xue, L., Zou, H.: Strong oracle optimality of folded concave penalized estimation. Ann. Stat. 42, 819–849 (2014)
Frank, I., Friedman, J.: A statistical view of some chemometrics regression tools (with discussion). Technometrics 35, 109–135 (1993)
Hall, P., Horowitz, J.L.: Methodology and convergence rates for functional linear regression. Ann. Stat. 35, 70–91 (2007)
Hsing, T., Eubank, R.: Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators. Wiley, New York (2015)
Liang, H., Li, R.: Variable selection for partially linear models with measurement errors. J. Am. Stat. Assoc. 104, 234–248 (2009)
Lv, J., Fan, J.: A unified approach to model selection and sparse recovery using regularized least squares. Ann. Stat. 37, 3498–3528 (2009)
Ramsay, J.O., Silverman, B.W.: Applied Functional Data Analysis: Methods and Case Studies. Springer, New York (2002)
Ramsay, J.O., Silverman, B.W.: Functional Data Analysis. Springer, New York (2005)
Reiss, P.T., Ogden, R.T.: Functional generalized linear models with images as predictors. Biometrics 66, 61–69 (2010)
Shin, H.: Partial functional linear regression. J. Stat. Plan. Inference 139, 3405–3418 (2009)
Shin, H., Lee, M.H.: On prediction rate in partial functional linear regression. J. Multivar. Anal. 103, 93–106 (2012)
Tang, Q.: Estimation for semi-functional linear regression. Statistics 49, 1262–1278 (2015)
Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. B 58, 267–288 (1996)
Wang, M., Wang, X.: Adaptive Lasso estimators for ultrahigh dimensional generalized linear models. Stat. Prob. Lett. 89, 41–50 (2014)
Zhang, C.H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38, 894–942 (2010)
Zhang, D., Lin, X., Sowers, M.F.: Two-stage functional mixed models for evaluating the effect of longitudinal covariate profiles on a scalar outcome. Biometrics 63, 351–362 (2007)
Zou, H.: The adaptive lasso and its oracle properties. J. Am. Stat. Assoc. 101, 1418–1429 (2006)
Acknowledgements
This work was supported by the National Social Science Foundation of China (16BTJ019), the Humanities and Social Science Foundation of Ministry of Education of China (14YJA910004) and Natural Science Foundation of Jiangsu Province of China (Grant No. BK20151481).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Proofs
In this section, let \(C>0\) denote a generic constant of which the value may change from line to line. For a matrix \(A=(a_{ij})\), set \(\Vert A\Vert _{\infty }=\max _{i}\sum _{j}|a_{ij}|\) and \(|A|_{\infty }=\max _{i,j}|a_{ij}|\). For a vector \(v=(v_{1},\ldots ,v_{k})^{T}\), set \(\Vert v\Vert _{\infty }=\sum _{j=1}^{k}|v_{j}|\) and \(|v|_{\infty }=\max _{1\le j\le k}|v_{j}|\). Denote \(W_{l}=\sum _{j=1}^{\infty }\gamma _{j}\xi _{lj}\), \(\tilde{W}_{i}=W_{i}-\frac{1}{n}\sum _{l=1}^{n}W_{l}\tilde{\xi }_{li}\), \(\tilde{\varepsilon }_{i}=\varepsilon _{i}-\frac{1}{n}\sum _{l=1}^{n}\varepsilon _{l}\tilde{\xi }_{li}\) and \(\tilde{W}=(\tilde{W}_{1},\ldots ,\tilde{W}_{n})^{T}\), \(\tilde{\varepsilon }=(\tilde{\varepsilon }_{1},\ldots ,\tilde{\varepsilon }_{n})^{T}\). Then
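As a quick illustration of these conventions — note that \(\Vert v\Vert _{\infty }\) here denotes the sum of absolute entries, not the maximum — here is a small numpy check (an assumed translation, for illustration only):

```python
# Illustration of the norm conventions used in this appendix; note that
# ||v||_inf is defined above as the SUM of |v_j|, not the maximum.
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 0.5]])
v = np.array([1.0, -4.0, 2.0])

norm_A_inf = np.max(np.abs(A).sum(axis=1))  # ||A||_inf: max absolute row sum -> 3.5
abs_A_inf = np.max(np.abs(A))               # |A|_inf: largest absolute entry -> 3.0
norm_v_inf = np.sum(np.abs(v))              # ||v||_inf: sum of |v_j| -> 7.0
abs_v_inf = np.max(np.abs(v))               # |v|_inf: max |v_j| -> 4.0
print(norm_A_inf, abs_A_inf, norm_v_inf, abs_v_inf)
```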
Lemma A.1
Suppose that Assumptions 1, 2, 4 and 5 hold. Then it holds that
Proof
Let \(\tilde{Z}_{i}=(\tilde{Z}_{i1},\ldots ,\tilde{Z}_{id})^{T}\). Set \(\vec {\xi }_{li}=\sum _{j=1}^{m}\frac{\xi _{lj} \xi _{ij}}{\lambda _{j}}\), \(\vec {Z}_{ir1}=Z_{ir}-\frac{1}{n}\sum _{l=1}^{n}Z_{lr}\vec {\xi }_{li}\) and \(\vec {Z}_{ir2}=\frac{1}{n}\sum _{l=1}^{n}Z_{lr}(\tilde{\xi }_{li}-\vec {\xi }_{li}).\) Then \(\tilde{Z}_{ir}=\vec {Z}_{ir1}-\vec {Z}_{ir2}\) and
Let \(\vec {Z}_{ir21}=\sum _{j=1}^{m}\frac{1}{\lambda _{j}} \left[ \frac{1}{n}\sum _{l=1}^{n}Z_{lr}(\hat{\xi }_{lj}-\xi _{lj})\right] \xi _{ij}\), \(\vec {Z}_{ir22}=\sum _{j=1}^{m}\left( \frac{1}{\hat{\lambda }_{j}} -\frac{1}{\lambda _{j}}\right) \left( \frac{1}{n}\sum _{l=1}^{n}Z_{lr}\hat{\xi }_{lj}\right) \xi _{ij}\) and \(\vec {Z}_{ir23}=\sum _{j=1}^{m}\frac{1}{\hat{\lambda }_{j}} \left( \frac{1}{n}\sum _{l=1}^{n}Z_{lr}\hat{\xi }_{lj}\right) (\hat{\xi }_{ij}-\xi _{ij}).\) We then have
Lemma 5.1 of Hall and Horowitz (2007) implies that
where \(\Delta =\hat{K}-K\). We then obtain that
where \(\vec {\xi }_{rj}=\frac{1}{n}\sum _{l=1}^{n}Z_{lr}\xi _{lj}\). Lemma 1 of Cardot et al. (2007) implies that
uniformly for \(1\le j\le m\). By (5.2) of Hall and Horowitz (2007), it holds that \(\sup _{j\ge 1}|\hat{\lambda }_{j}-\lambda _{j}|\le |\Vert \Delta \Vert |=O_{p}(n^{-1/2})\) and
where \(|\Vert \Delta \Vert |=(\int _{{\mathcal {T}}}\int _{{\mathcal {T}}}\Delta ^{2}(s,t)\mathrm{d}s\mathrm{d}t)^{1/2}\). Using Parseval’s identity, we get that
Assumption 4 implies that \(|\hat{\lambda }_{j}-\lambda _{j}|=o_{p}(\lambda _{m}/m)\). Consequently, \(\sum _{k\ne j}\frac{\vec {\xi }_{rk}^{2}}{(\hat{\lambda }_{j}-\lambda _{k})^{2}}=\sum _{k\ne j}\frac{\vec {\xi }_{rk}^{2}}{(\lambda _{j}-\lambda _{k})^{2}}[1+o_{p}(1)]\), where \(o_{p}(1)\) holds uniformly for \(1\le j\le m\). By arguments similar to those used in the proof of Lemma 2 of Cardot et al. (2007), and using the fact that \((\lambda _{j}-\lambda _{k})^{2}\ge (\lambda _{k}-\lambda _{k+1})^{2}\), we deduce that
Lemma 1 of Cardot et al. (2007) yields that
and \(\sum _{j=1}^{m}\lambda _{j}^{-1}\le \lambda _{m}^{-1}m\). Therefore,
and
Decomposing \(\frac{1}{n}\sum _{l=1}^{n}Z_{lr}\hat{\xi }_{lj}=\vec {\xi }_{rj}+\frac{1}{n}\sum _{l=1}^{n}Z_{lr}(\hat{\xi }_{lj} -\xi _{lj})\) and using (A.6), we get
By (A.10) of Tang (2015), it holds that
where \(O_{p}(\cdot )\) holds uniformly for \(1\le j\le m\). Using (A.8) and (A.9), we obtain
Hence, by (A.3), (A.7), (A.8), and (A.10) and Assumption 4, we conclude that
Define \(\check{\xi }_{jr}=\frac{1}{n}\sum _{l=1}^{n}\lambda _{j}^{-1/2}\xi _{lj}Z_{lr}\). Bounding the maximum by the sum, \(E[\max _{1\le j\le m}(\check{\xi }_{jr}-E(\check{\xi }_{jr}))^{2}]\le \sum _{j=1}^{m}E(\check{\xi }_{jr}-E(\check{\xi }_{jr}))^{2}\le \frac{1}{n}\sum _{j=1}^{m}\lambda _{j}^{-1}E(\xi _{j}Z_{r})^{2}\le Cn^{-1}\), so that \(\max _{1\le j\le m}|\check{\xi }_{jr}-E(\check{\xi }_{jr})|=O_{p}(n^{-1/2})\). Hence
where \(\bar{\xi }_{jj'}=\frac{1}{n(\lambda _{j}\lambda _{j'})^{1/2}}\sum _{i=1}^{n}\xi _{ij}\xi _{ij'}\). Now Lemma A.1 follows from (A.2), (A.11), (A.12) and the fact that \(\frac{1}{n}|\sum _{i=1}^{n}\vec {Z}_{ir1}\vec {Z}_{iq2}| \le \left( \frac{1}{n}\sum _{i=1}^{n}\vec {Z}_{ir1}^{2}\right) ^{1/2} \left( \frac{1}{n}\sum _{i=1}^{n}\vec {Z}_{iq2}^{2}\right) ^{1/2}\). \(\square \)
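The bound \(\sup _{j\ge 1}|\hat{\lambda }_{j}-\lambda _{j}|\le |\Vert \Delta \Vert |=O_{p}(n^{-1/2})\) from Hall and Horowitz (2007), used repeatedly in this proof, is easy to check numerically. The following is a hedged simulation sketch; the Karhunen–Loève design is an illustrative assumption.

```python
# Hedged check that sup_j |lambda_hat_j - lambda_j| scales like n^{-1/2}
# for empirical covariance eigenvalues; the design below is illustrative.
import numpy as np

rng = np.random.default_rng(1)
G = 201
t = np.linspace(0.0, 1.0, G)
dt = t[1] - t[0]
J = 30
phi = np.array([np.sqrt(2.0) * np.sin(j * np.pi * t) for j in range(1, J + 1)])
lam = 1.0 / np.arange(1, J + 1) ** 2

for n in (200, 800, 3200):
    xi = rng.normal(size=(n, J)) * np.sqrt(lam)
    X = xi @ phi
    Xc = X - X.mean(axis=0)
    K_hat = Xc.T @ Xc / n * dt                      # discretized covariance kernel
    lam_hat = np.sort(np.linalg.eigvalsh(K_hat))[::-1][:J] * dt
    err = np.max(np.abs(lam_hat - lam))
    print(n, err, err * np.sqrt(n))                 # last column roughly stable
```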
Lemma A.2
Under Assumptions 1–4, it holds that
Proof
Set \(S_{1}=\sum _{j=1}^{m}\lambda _{j}\left[ \gamma _{j}-\frac{1}{\lambda _{j}} \left( \frac{1}{n}\sum _{l=1}^{n}W_{l}\xi _{lj}\right) \right] ^{2}\), \(S_{2}=\sum _{j=1}^{m}\frac{1}{\lambda _{j}} \left[ \frac{1}{n}\sum _{l=1}^{n}W_{l}(\hat{\xi }_{lj}-\xi _{lj})\right] ^{2}\) and \(S_{3}=\sum _{j=1}^{m}\lambda _{j} \left( \frac{1}{\hat{\lambda }_{j}}-\frac{1}{\lambda _{j}}\right) ^{2} \left( \frac{1}{n}\sum _{l=1}^{n}W_{l}\hat{\xi }_{lj}\right) ^{2}\). We have
Since \(E\left[ \gamma _{j}-\frac{1}{\lambda _{j}}\left( \frac{1}{n}\sum _{l=1}^{n}W_{l}\xi _{lj}\right) \right] =0\), by Assumptions 1–3 we obtain that
Similar to the proof of (A.6) and (A.8) and using Assumption 4, we deduce that
and
Now Lemma A.2 follows from (A.13)–(A.16). \(\square \)
Lemma A.3
Under Assumptions 1, 2, 4 and 5, it holds that
Proof
Let \(Z_{ir}^{*}=Z_{ir}-\sum _{j'=1}^{m}\frac{1}{\lambda _{j'}} \left( \frac{1}{n}\sum _{l=1}^{n}Z_{lr}\xi _{lj'}\right) \xi _{ij'}\). Observe that
By direct computation and using Assumption 1, we get
and
Hence
Since \(\sum _{j'=1}^{m}\frac{1}{\lambda _{j'}}E\left( \sum _{i=1}^{n}\xi _{ij}\xi _{ij'}\right) ^{2}\le Cn^{2}\lambda _{j}\), by (A.6) we have
Similar to the proof of (A.8) and using Assumption 4, we deduce that
and
Now Lemma A.3 follows from (A.17)–(A.21) and Assumption 4. \(\square \)
Lemma A.4
Under Assumptions 1–5, it holds that
Proof
Let \(\breve{W}_{j}=\frac{1}{n}\sum _{l=1}^{n}W_{l}\hat{\xi }_{lj}\). Applying the Cauchy–Schwarz inequality, we get
Using (A.4), (A.5), Assumption 4 and Parseval’s identity and the arguments similar to those used to prove Lemma A.3, we deduce that
Let \(\vec {W}_{j}=\frac{1}{n}\sum _{l=1}^{n}W_{l}\xi _{lj}\). Decomposing \(\frac{1}{n}\sum _{l=1}^{n}W_{l}\hat{\xi }_{lj}=\vec {W}_{j}+\frac{1}{n}\sum _{l=1}^{n}W_{l}(\hat{\xi }_{lj}-\xi _{lj})\) and arguing as in the proof of (A.6), using Assumption 4, we obtain that
This finishes the proof of Lemma A.4. \(\square \)
Lemma A.5
Under Assumptions 1–5, it holds that
Proof
Observe that
Lemmas A.2 and A.3 and Assumption 4 imply that
By arguments similar to those used in the proof of Lemma A.3, we obtain that
Now Lemma A.5 follows from (A.22)–(A.24) and Lemma A.4. \(\square \)
Proof of Theorem 2.1
By arguments similar to those used to prove Lemmas A.4 and A.5, we deduce that \(n^{-1/2}\sum _{i=1}^{n}\left( \frac{1}{n}\sum _{l=1}^{n}\varepsilon _{l}\tilde{\xi }_{li}\right) \tilde{Z}_{ir}=o_{p}(1)\). Hence
We decompose \(\sum _{i=1}^{n}\tilde{Z}_{ir}\varepsilon _{i}\) into three terms as
Similar to the proof of Lemma A.4, we have \(\sum _{i=1}^{n}\varepsilon _{i} \frac{1}{n}\sum _{l=1}^{n}Z_{lr}(\tilde{\xi }_{li}-\vec {\xi }_{li})=o_{p}(n)\). Since \(\sum _{i=1}^{n}\varepsilon _{i}\sum _{j=1}^{m}\frac{\xi _{ij}}{\lambda _{j}}\left( \frac{1}{n}\sum _{l=1}^{n}Z_{lr}\xi _{lj}-E(Z_{lr}\xi _{j})\right) =o_{p}(n)\) and \(\sum _{i=1}^{n}\varepsilon _{i}\sum _{j=m+1}^{\infty }g_{kj}\xi _{ij}=o_{p}(n)\), it follows that
Now (2.9) follows from (A.1), Lemmas A.1 and A.5, (A.25) and the central limit theorem. The proof of Theorem 2.1 is finished. \(\square \)
Lemma A.6
Define \(\check{\gamma }_{j}=\frac{1}{\hat{\lambda }_{j} }E[(Y-Z^{T}\pmb {\beta }_{0})\xi _{j}]\). Under the assumptions of Theorem 3.2, it holds that
Proof
Define \(I_{1}=\frac{1}{n}\sum _{i=1}^{n} (Y_{i}-Z_{i}^{T}\pmb {\beta }_{0})\xi _{ij} -\gamma _{j}\lambda _{j}\), \(I_{2}=\frac{1}{n}\sum _{i=1}^{n}(Y_{i}-Z_{i}^{T}\pmb {\beta }_{0})(\hat{\xi }_{ij}-\xi _{ij})\) and \(I_{3}=\frac{1}{n}\sum _{i=1}^{n}Z_{i}^{T}(\hat{\pmb {\beta }}-\pmb {\beta }_{0})\hat{\xi }_{ij}\). Noting that \(E[(Y-Z^{T}\pmb {\beta }_{0})\xi _{j}]=\gamma _{j}\lambda _{j}\), we have
where \(o_{p}(1)\) holds uniformly for \(j=1,\ldots ,m\). Since \(E(I_{1})=0\) and \(E(I_{1}^{2})\le \frac{1}{n}[\sum _{k=1}^{\infty }\gamma _{k} ^{2}E(\xi _{k}^{2}\xi _{j}^{2})+\sigma ^{2}\lambda _{j}]\le C\lambda _{j}/n\), we obtain that
Let \(M(t)=E[(Y_{i}-Z_{i}^{T}\pmb {\beta }_{0})X_{i}(t)] =\sum _{k=1}^{\infty }\gamma _{k}\lambda _{k}\phi _{k}(t)\). Then
Applying Assumption 1, it holds that
From (A.9), we obtain \(\sum _{j=1}^{m}\lambda _{j}^{-2}\Vert \hat{\phi }_{j}-\phi _{j}\Vert ^{2}=O_{p}(n^{-1}m^{3}\lambda _{m}^{-2} \log m)\). By arguments similar to those used in the proof of (5.15) of Hall and Horowitz (2007), it follows that
Hence, using the assumption that \(n^{-1}m^{2}\lambda _{m} ^{-1}\log m\rightarrow 0\), we obtain
Using Theorem 3.1, it holds that
Now Lemma A.6 follows from combining (A.26)–(A.29). \(\square \)
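For completeness, the identity \(M(t)=\sum _{k=1}^{\infty }\gamma _{k}\lambda _{k}\phi _{k}(t)\) used in the proof above can be verified directly from the model structure; the following short derivation assumes \(X\) is centered and \(\varepsilon \) is uncorrelated with \(X(t)\):
\[
M(t)=E\Big[\Big(\int _{{\mathcal {T}}}X_{i}(s)\gamma (s)\,\mathrm{d}s+\varepsilon _{i}\Big)X_{i}(t)\Big]=\int _{{\mathcal {T}}}K(t,s)\gamma (s)\,\mathrm{d}s=\sum _{k=1}^{\infty }\gamma _{k}\lambda _{k}\phi _{k}(t),
\]
where the last step uses \(K(t,s)=\sum _{k}\lambda _{k}\phi _{k}(t)\phi _{k}(s)\), \(\gamma (s)=\sum _{k}\gamma _{k}\phi _{k}(s)\) and the orthonormality of \(\{\phi _{k}\}\). In particular, \(E[(Y-Z^{T}\pmb {\beta }_{0})\xi _{j}]=\int _{{\mathcal {T}}}M(t)\phi _{j}(t)\,\mathrm{d}t=\gamma _{j}\lambda _{j}\), which is the relation used at the start of the proof.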
Proof of Theorem 2.2
Note that
and
Assumption 3 implies that \(m\sum _{j=1}^{m}\gamma _{j}^{2}\Vert \hat{\phi }_{j}-\phi _{j}\Vert ^{2}=O_{p}(mn^{-1}\sum _{j=1}^{m}\gamma _{j}^{2}j^{2}\log j)=o_{p}(m/n)\) and \(\sum _{j=m+1}^{\infty }\gamma _{j}^{2}=O(m^{-2\gamma +1})\). Now (2.10) follows from Lemma A.6, (A.30) and (A.31). The proof of Theorem 2.2 is finished. \(\square \)
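The tail bound \(\sum _{j=m+1}^{\infty }\gamma _{j}^{2}=O(m^{-2\gamma +1})\) used above is the standard integral-comparison step; assuming \(|\gamma _{j}|\le Cj^{-\gamma }\) with \(2\gamma >1\) (the usual reading of Assumption 3),
\[
\sum _{j=m+1}^{\infty }\gamma _{j}^{2}\le C^{2}\sum _{j=m+1}^{\infty }j^{-2\gamma }\le C^{2}\int _{m}^{\infty }x^{-2\gamma }\,\mathrm{d}x=\frac{C^{2}}{2\gamma -1}\,m^{-2\gamma +1}.
\]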
Proof of Theorem 2.3
Observe that
where \(\Vert \hat{\gamma }-\gamma \Vert _{K}^{2}=\int _{{\mathcal {T}}}\int _{{\mathcal {T}} }K(s,t)[\hat{\gamma }(s)-\gamma (s)][\hat{\gamma }(t)-\gamma (t)]\mathrm{d}s\mathrm{d}t\). Under the assumptions of Theorem 2.3, using arguments similar to those used in the proof of Theorem 2 of Tang (2015), we deduce that \(\Vert \hat{\gamma }-\gamma \Vert _{K}^{2}=O_{p}(n^{-(\tau +2\delta -1)/(\tau +2\delta )})\). Now (2.12) follows from (A.32) and Theorem 2.1. The proof of Theorem 2.3 is finished. \(\square \)
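For readers implementing the procedure, \(\Vert \cdot \Vert _{K}\) is the covariance-weighted \(L^{2}\) norm that governs the prediction error. Below is a hedged sketch of computing it by a double Riemann sum on a grid; the kernel and the function used are illustrative stand-ins.

```python
# Hedged sketch: computing ||g||_K^2 = \int\int K(s,t) g(s) g(t) ds dt
# by a double Riemann sum on a grid; K and g below are illustrative.
import numpy as np

G = 101
t = np.linspace(0.0, 1.0, G)
dt = t[1] - t[0]
K = np.minimum.outer(t, t)                  # e.g. Brownian-motion covariance
g = np.sin(np.pi * t) - np.cos(np.pi * t)   # stand-in for gamma_hat - gamma

norm_K_sq = g @ K @ g * dt * dt             # double Riemann sum over s and t
print(norm_K_sq)                            # nonnegative, since K is a covariance
```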
Lemma A.7
Under the assumptions of Theorem 3.1, there exists a local minimizer \(\hat{\pmb {\beta }}\) of (3.1) such that \(\Vert \hat{ \pmb {\beta }}-\pmb {\beta }_{0}\Vert =O_{p}(n^{-1/2})\).
Proof
Let
and \(D_{n}(\pmb {\beta })=(\tilde{Y}-\tilde{Z}\pmb {\beta })^{T}(\tilde{Y}-\tilde{Z}\pmb {\beta })+P_{n}(\pmb {\beta })\). It suffices to prove that for any given \(\varepsilon >0\), there exists a constant C such that
Note that
and
By Lemma A.5, we have that \(n^{-1/2}\tilde{W}^{T}\tilde{Z}=o_{p}(1)\). By (A.25), it follows that \(n^{-1/2}\tilde{\varepsilon }^{T}\tilde{Z}=O_{p}(1)\). By Theorem 2.1, it holds that \(\pmb {\beta }^{(0)}\rightarrow _{P}\pmb {\beta }_{0}\), and we then have \(P\{P_{n1}( \pmb {\beta }_{01}+n^{-1/2}u_{1})-P_{n1}(\pmb {\beta }_{01})=0\}\rightarrow 1\) as \(n\rightarrow \infty \). Hence, for sufficiently large C, (A.33) follows from (A.34) and Lemma A.1 and the fact that \(\Omega \) is positive definite. The proof of Lemma A.7 is complete. \(\square \)
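The criterion \(D_{n}(\pmb {\beta })=(\tilde{Y}-\tilde{Z}\pmb {\beta })^{T}(\tilde{Y}-\tilde{Z}\pmb {\beta })+P_{n}(\pmb {\beta })\) is penalized least squares with a folded-concave penalty. As an illustration only — the paper's exact \(P_{n}\) and tuning parameter \(\nu _{n}\) may differ — the sketch below evaluates \(D_{n}\) with the SCAD penalty of Fan and Li (2001):

```python
# Hedged sketch: evaluating D_n(beta) = ||Y_t - Z_t beta||^2 + P_n(beta)
# with an illustrative SCAD penalty (Fan and Li 2001); the paper's exact
# P_n may differ.
import numpy as np

def scad_penalty(b, nu, a=3.7):
    """SCAD penalty p_nu(|b_j|), evaluated elementwise."""
    x = np.abs(b)
    small = nu * x
    mid = (2 * a * nu * x - x ** 2 - nu ** 2) / (2 * (a - 1))
    large = nu ** 2 * (a + 1) / 2
    return np.where(x <= nu, small, np.where(x <= a * nu, mid, large))

def D_n(beta, Z_t, Y_t, nu):
    resid = Y_t - Z_t @ beta
    return resid @ resid + len(Y_t) * scad_penalty(beta, nu).sum()

rng = np.random.default_rng(2)
n, d = 200, 5
Z_t = rng.normal(size=(n, d))
beta_true = np.array([1.0, -0.8, 0.0, 0.0, 0.0])   # sparse truth
Y_t = Z_t @ beta_true + rng.normal(0.0, 0.3, n)
print(D_n(beta_true, Z_t, Y_t, nu=0.1))
```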
Proof of Theorem 3.1
We first prove that, for sufficiently large \(n\), for any \(\pmb {\beta }=(\pmb {\beta }_{1}^{T},\pmb {\beta }_{2}^{T})^{T}\) in the neighborhood \(\Vert \pmb {\beta }-\pmb {\beta }_{0}\Vert =O(n^{-1/2})\) with \(\pmb {\beta }_{2}\ne \pmb {0}\), with probability tending to 1 we have
Observe that
By Lemma A.5, we have that \(n^{-1/2}\tilde{W}^{T}\tilde{Z}=o_{p}(1)\). By (A.25), it follows that \(n^{-1/2}\tilde{\varepsilon }^{T}\tilde{Z}=O_{p}(1)\). Hence, using Lemma A.1, the facts that \(\Vert \pmb {\beta }_{2}\Vert =O(n^{-1/2})\) and \(n^{1/2}\nu _{n}\rightarrow +\infty \), and the result of Theorem 2.1, we deduce that, with probability tending to 1, it holds that
By Lemma A.7 and (A.35), there exists a \( \sqrt{n}\)-consistent local minimizer \(\check{\pmb {\beta }}=(\check{ \pmb {\beta }}_{1},\pmb {0}^{T})^{T}\) of (3.1). Note that
where \(\hat{\pmb {\beta }}_\mathrm{PLS}=(\hat{\beta }_{\mathrm{PLS}1},\ldots ,\hat{\beta }_{\mathrm{PLS}d})^{T}\). Write \(\tilde{Z}=(\tilde{\pmb {Z}}_{1},\tilde{\pmb {Z}}_{2})\). Since \(\hat{\pmb {\beta }}_\mathrm{PLS}\) is a minimizer of (3.1) and \(\check{\pmb {\beta }}\) is a local minimizer of (3.1), we then have that
By Lemma A.5, we have that \(n^{-1/2}\tilde{W}^{T}\tilde{\pmb {Z}}_{2}=o_{p}(1)\). By (A.25), it follows that \(n^{-1/2}\tilde{\varepsilon }^{T}\tilde{\pmb {Z}}_{2}=O_{p}(1)\). The fact that \(\pmb {\beta }_{0}-\check{\pmb {\beta }}=O_{p}(n^{-1/2})\) and Lemma A.1 imply that \(n^{-1/2}(\pmb {\beta }_{0}-\check{\pmb {\beta }})^{T}\tilde{Z}^{T}\tilde{\pmb {Z}}_{2}=O_{p}(1)\). If \(\hat{\pmb {\beta }}_\mathrm{PLS}\ne \check{\pmb {\beta }}\), then under the assumptions of Theorem 3.1, (A.36) and (A.37) give \(D_{n}((\hat{\pmb {\beta }}_{\mathrm{PLS}1},\hat{\pmb {\beta }}_{\mathrm{PLS}2}))>D_{n}((\check{\pmb {\beta }}_{1},\pmb {0}))\), which contradicts the fact that \(\hat{\pmb {\beta }}_\mathrm{PLS}\) is a minimizer of (3.1). So \(\hat{\pmb {\beta }}_{\mathrm{PLS}2}=\pmb {0}\) and \(\hat{\pmb {\beta }}_{\mathrm{PLS}1}=\check{\pmb {\beta }}_{1}\).
We now prove the asymptotic normality part. Consider \(D_{n}((\pmb {\beta }_{1},\pmb {0}))\) as a function of \(\pmb {\beta }_{1}\). Note that, with probability tending to 1, \(\hat{\pmb {\beta }}_{\mathrm{PLS}1}\) is the \(\sqrt{n}\)-consistent minimizer of \(D_{n}((\pmb {\beta }_{1},\pmb {0}))\) and satisfies
Hence
By arguments similar to those used in the proof of (2.9), we can prove (3.2). The proof of Theorem 3.1 is finished. \(\square \)
Proof of Theorem 3.2
Similar to the proofs of Theorems 2.2 and 2.3, we can complete the proof of Theorem 3.2. \(\square \)