Abstract
This paper considers estimation of a functional partially linear quantile regression model whose parameters include an infinite dimensional slope function as well as finite dimensional slope parameters. We show asymptotic normality of the estimator of the finite dimensional parameter and derive the rate of convergence of the estimator of the infinite dimensional slope function. In addition, we establish the rate of convergence of the mean squared prediction error for the proposed estimator. A simulation study illustrates the finite-sample performance of the resulting estimators.
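The estimation idea described in the abstract, regressing the response on the scalar covariates together with the leading functional principal component scores under a quantile check loss, admits a simple numerical illustration. The sketch below is not the authors' simulation design; the sample size, basis, noise level, and optimizer are all assumptions chosen only to make the mechanics concrete.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
tau = 0.5                                    # quantile level
n = 200
grid = np.linspace(0.0, 1.0, 101)
w = grid[1] - grid[0]                        # Riemann-sum weight

# scalar covariate Z and functional covariate X(t) built from a cosine basis
Z = rng.normal(size=(n, 1))
scores_true = rng.normal(size=(n, 2)) * np.array([1.0, 0.5])
basis = np.vstack([np.sqrt(2.0) * np.cos(np.pi * k * grid) for k in (1, 2)])
X = scores_true @ basis                      # n x 101 discretized curves

theta0 = 1.0                                 # true scalar slope
beta_fun = np.sqrt(2.0) * np.cos(np.pi * grid)    # true slope function beta(t)
integral = (X * beta_fun).sum(axis=1) * w    # approximates \int beta(t) X_i(t) dt
y = Z[:, 0] * theta0 + integral + 0.1 * rng.normal(size=n)

# empirical FPCA: eigenfunctions of the sample covariance operator
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc / n) * w
vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
phi_hat = vecs[:, ::-1][:, :2].T / np.sqrt(w)     # top-2 estimated eigenfunctions
S = X @ phi_hat.T * w                        # FPCA scores <X_i, phi_hat_j>

# minimize the quantile check loss over (theta, gamma_1, gamma_2)
D = np.column_stack([Z, S])

def check_loss(b):
    r = y - D @ b
    return np.sum(r * (tau - (r < 0.0)))     # rho_tau(r) summed over i

fit = minimize(check_loss, x0=np.zeros(D.shape[1]),
               method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-10})
theta_hat = fit.x[0]
```

With this setup `theta_hat` recovers the scalar slope closely; the slope-function estimate is `fit.x[1:] @ phi_hat`, determined only up to the usual sign ambiguity of the eigenfunctions.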
References
Aneiros-Pérez G, Vieu P (2006) Semi-functional partial linear regression. Stat Prob Lett 76:1102–1110
Cardot H, Crambes C, Sarda P (2005) Quantile regression when the covariates are functions. J Nonparametr Stat 17:841–856
Engle R, Granger C, Rice J, Weiss A (1986) Semiparametric estimates of the relation between weather and electricity sales. J Am Stat Assoc 81:310–320
Fan YQ, Li Q (1999) Root-n-consistent estimation of partially linear time series models. J Nonparametr Stat 11:251–269
Gao JT (1995) Asymptotic theory for partly linear models. Commun Stat Theory Methods 24:1985–2009
Hall P, Horowitz JL (2007) Methodology and convergence rates for functional linear regression. Ann Stat 35:70–91
Härdle W, Liang H, Gao JT (2000) Partially linear models. Physica Verlag, Heidelberg
He X, Liang H (2000) Quantile regression estimates for a class of linear and partially linear errors-in-variables models. Statistica Sinica 1:129–140
He X, Shi PD (1996) Bivariate tensor-product B-spline in a partly linear model. J Multivar Anal 58:162–181
He X, Zhu ZY, Fung WK (2002) Estimation in a semiparametric model for longitudinal data with unspecified dependence structure. Biometrika 89:579–590
Koenker R (2005) Quantile regression. Cambridge University Press, Cambridge
Koenker R, Bassett G (1978) Regression quantiles. Econometrica 46:33–51
Liu Q (2011) Asymptotic normality for the partially linear EV models with longitudinal data. Commun Stat Theory Methods 40:1149–1158
Mammen E, van de Geer S (1997) Penalized quasi-likelihood estimation in partial linear models. Ann Stat 25:1014–1035
Moyeed RA, Diggle PJ (1994) Rate of convergence in semiparametric modeling of longitudinal data. Aust J Stat 36:75–93
Ramsay J, Silverman B (2005) Functional data analysis, 2nd edn. Springer, New York
Shi P, Li G (1994) On the rate of convergence of minimum \(L_1\)-norm estimates in a partly linear model. Commun Stat Theory Methods 23:175–196
Shin H (2009) Partial functional linear regression. J Stat Plan Inference 139:3405–3418
Speckman P (1988) Kernel smoothing in partial linear models. J R Stat Soc Ser B 50:413–436
Stone CJ (1985) Additive regression and other nonparametric models. Ann Stat 13:689–705
Wang H, Zhu ZY, Zhou JH (2009) Quantile regression in partially linear varying coefficient models. Ann Stat 37:3841–3866
Zhang X, Liang H (2011) Focused information criterion and model averaging for generalized additive partial linear models. Ann Stat 39:174–200
Acknowledgments
The authors would like to thank the referees for their helpful comments, which led to an improvement of an earlier version of the manuscript.
Additional information
Jiang Du’s work is supported by the National Natural Science Foundation of China (No. 11271039, No. 11101015, No. 11261025), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20091103120012) and the fund from the government of Beijing (No. 2011D005015000007). Sun’s research was supported by grants from the Foundation of Academic Discipline Program at Central University of Finance and Economics and National Statistics Research Projects in 2012.
Appendix
Let \({{\varvec{Z}}}=({{\varvec{Z}}}_1^T,\ldots , {{\varvec{Z}}}^T_n)^T\) and \({{\varvec{U}}}=({{\varvec{U}}}_1,\ldots , {{\varvec{U}}}_n)^T\) be the \(n\times p\) and \(n\times m\) design matrices for the parametric slope component and the functional slope component, respectively. Also let \({{\varvec{P}}}= {{\varvec{U}}}({{\varvec{U}}}^T{{\varvec{U}}})^{-1}{{\varvec{U}}}^T\), \({{\varvec{Z}}}^*=({{\varvec{I}}}-{{\varvec{P}}}){{\varvec{Z}}}=({{\varvec{Z}}}^*_1,\ldots ,{{\varvec{Z}}}^*_n)^T\), and \({{\varvec{S}}}_n={{{\varvec{Z}}}^*}^T{{\varvec{Z}}}^*\).
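Several bounds in the proofs below rely only on elementary properties of the hat matrix \({{\varvec{P}}}\). For the reader's convenience (these facts are standard and are not taken from the paper's displayed equations):

```latex
\[
{{\varvec{P}}}^{T}={{\varvec{P}}},\qquad
{{\varvec{P}}}^{2}={{\varvec{P}}},\qquad
\operatorname{tr}({{\varvec{P}}})=\operatorname{rank}({{\varvec{U}}})\le m,\qquad
\Vert ({{\varvec{I}}}-{{\varvec{P}}}){{\varvec{v}}}\Vert \le \Vert {{\varvec{v}}}\Vert
\quad \text{for all } {{\varvec{v}}}\in \mathbb {R}^{n}.
\]
```

The trace bound is what makes terms involving \({{\varvec{P}}}\) of order \(O(m)\), and the contraction property justifies replacing \({{\varvec{Z}}}\) by \({{\varvec{Z}}}^*\) without inflating norms.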
Lemma 1
Under conditions C1–C5, one has
Proof
Let \({\varvec{\eta }}=({\varvec{\eta }}_1,\ldots ,{\varvec{\eta }}_n)^T\) and write \({{\varvec{Z}}} = ({{\varvec{Z}}}- {\varvec{\Delta }}) +{\varvec{\Delta }}={\varvec{\eta }}+{\varvec{\Delta }}\), where \({\varvec{\Delta }}=(\langle {{\varvec{g}}}, X_1 \rangle ,\ldots , \langle {{\varvec{g}}}, X_n \rangle )^T\) and \({{\varvec{g}}}=(g_1,\ldots ,g_p)^T\) is defined in Condition C5. We can write
where
Invoking the central limit theorem, one has
Invoking (7), to prove Lemma 1, it is enough to show that
By Condition C2, the fact that \(\Vert \phi _{j}-\hat{\phi }_{j}\Vert ^2=O_p(n^{-1}j^2)\), and the Karhunen–Loève representation, one has
where \(\lambda _{lj}^{^{\prime }}=\langle \hat{\phi }_{j},g_{l}\rangle \) and \(\lambda _{lj}=\langle \phi _{j},g_{l}\rangle , l=1,\ldots ,p.\)
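For reference, the Karhunen–Loève representation invoked here expands each curve in the eigenbasis \(\{\phi _j\}\) of the covariance operator of \(X\) (standard; see Ramsay and Silverman 2005), and the coefficients \(\lambda _{lj}^{\prime }\) and \(\lambda _{lj}\) differ only through the estimated eigenfunctions:

```latex
\[
X_i(t)=\sum _{j=1}^{\infty }\langle X_i,\phi _j\rangle \,\phi _j(t),
\qquad
\lambda _{lj}^{\prime }-\lambda _{lj}=\langle \hat{\phi }_{j}-\phi _{j},\,g_{l}\rangle ,
\]
```

so that, by the Cauchy–Schwarz inequality, \(|\lambda _{lj}^{\prime }-\lambda _{lj}|\le \Vert \hat{\phi }_{j}-\phi _{j}\Vert \,\Vert g_{l}\Vert =O_p(n^{-1/2}j)\).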
By Condition C3, there exists a matrix \({{\varvec{M}}}\) such that \(\Vert {\varvec{\Delta }}-{{\varvec{U}}}{{\varvec{M}}}\Vert ^{2}=O_p\left( n^{a_1/(a+2b)}\right) \), where \(a_1=\max (3, a+1)\). In addition, as \({{\varvec{P}}}\) is a projection matrix, we have
By Condition C4 and the strong law of large numbers, \(\frac{1}{n}{\varvec{\eta }}^T{\varvec{\eta }}\) converges almost surely to \(\Sigma \). For \(k\ne l\), one has
In addition, as \({{\varvec{P}}}\) is a projection matrix, this expression is \(O(m)\). Since \({{\varvec{P}}}\) is a positive semidefinite matrix, when \(k=l\),
Invoking (9) and (10), we have
Similarly, we have
Invoking (11) and (12) and Condition C2, we have
and
The proof is hence complete. \(\square \)
Lemma 2
Under conditions C1–C5, we have
where \({ {\varvec{\psi }}({\varvec{\varepsilon }})}=(\psi (\varepsilon _{1}),\ldots ,\psi (\varepsilon _{n}))^{T}.\)
Proof
Invoking \({{\varvec{Z}}}={\varvec{\eta }}+{\varvec{\Delta }},\) we have
By Lemma 1, (8) and (11), we have
Thus,
By condition C5 and the central limit theorem, one has
\(\square \)
Proof of Theorem 1
Let
where \({{\varvec{H}}}_{m}=m{{\varvec{U}}}^{T}{{\varvec{U}}}\). Let \(\hat{{\varvec{\xi }}}={\varvec{\xi }}(\hat{{\varvec{\theta }}},\hat{{\varvec{\gamma }}})=(\hat{{\varvec{\xi }}}_{1}^{T},\hat{{\varvec{\xi }}}_{2}^{T})^{T}\). We now show that \(\Vert \hat{{\varvec{\xi }}}\Vert =O_{p}(\delta _{n})\). To this end, let \(\tilde{{{\varvec{Z}}}}_{i}=\frac{1}{f(0)}{{\varvec{S}}}_{n}^{-\frac{1}{2}}{{\varvec{Z}}}_{i}\), \(\tilde{{{\varvec{U}}}}_{i}={{\varvec{H}}}_{m}^{-1}{{\varvec{U}}}_{i}\), and \(R_{i}=\sum _{j=1}^{m}\langle x_{i},\hat{\phi }_{j}\rangle \gamma _{j0}-\int \nolimits _{0}^{1}\beta (t)x_{i}(t)dt\).
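Throughout, \(\psi _{\tau }\) denotes the subgradient of the quantile check function of Koenker and Bassett (1978). For reference, in the standard notation (assumed here to match the paper's conditions, with \(f\) the density of \(\varepsilon \) at its \(\tau \)th quantile):

```latex
\[
\rho _{\tau }(u)=u\,\bigl (\tau -I(u<0)\bigr ),\qquad
\psi _{\tau }(u)=\tau -I(u<0),
\]
```

and the estimators \((\hat{{\varvec{\theta }}},\hat{{\varvec{\gamma }}})\) minimize \(\sum _{i=1}^{n}\rho _{\tau }\bigl (Y_{i}-{{\varvec{Z}}}_{i}^{T}{\varvec{\theta }}-\sum _{j=1}^{m}\langle X_{i},\hat{\phi }_{j}\rangle \gamma _{j}\bigr )\), so the truncation bias \(R_{i}\) measures the error from replacing \(\int _{0}^{1}\beta (t)x_{i}(t)dt\) by its \(m\)-term expansion.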
Noting that \(\Vert \phi _{j}-\hat{\phi }_{j}\Vert ^2=O_p(n^{-1}j^2)\), one has
Thus, one has
which is minimized at \(\hat{{\varvec{\xi }}}\).
By arguments similar to those of Lemma 1 of Cardot et al. (2005), for any \(\kappa >0\) there exists \(L_{\kappa }\) such that
On the other hand, we have
Thus, we have
Combining this with (13), we obtain
Thus, \(\Vert \hat{{\varvec{\xi }}}\Vert =O_{p}(\delta _{n})\). This, together with Lemma 1 and the definition of \(\hat{{\varvec{\xi }}}\), yields
Note that
Now we consider \(K_{n1}\). Since the sequence \(\{\hat{\phi }_j\}\) forms an orthonormal basis of \(L^2([0, 1])\), one has
By Lemma 1 of Stone (1985), it is easy to show that \({{\varvec{H}}}_{m}\) is positive definite for sufficiently large \(n\). Therefore, one has
As a result, we have \( K_{n1}=O_{p}\left( \delta _{n}^{2}\right) \).
Therefore, one has
Next we show the asymptotic normality of \(\hat{{\varvec{\theta }}}\). Let \({\varvec{\xi }}_{1}^{*}=\frac{1}{f(0)}{{\varvec{S}}}_{n}^{-\frac{1}{2}}\sum _{i=1}^{n}{{\varvec{Z}}}_{i}^{*}\psi _{\tau }(\varepsilon _{i})\). According to Lemmas 1 and 2, \({\varvec{\xi }}_{1}^{*}\) is asymptotically normal with variance–covariance matrix \(\frac{\tau (1-\tau )}{f^{2}(0)}I_{p}\).
On the other hand, similarly to He and Shi (1996), we can prove that \(\Vert {\varvec{\xi }}_{1}^{*}-\hat{{\varvec{\xi }}}_{1}\Vert =o_{p}(1)\). Thus,
Obviously,
This completes the proof of Theorem 1.
Cite this article
Lu, Y., Du, J. & Sun, Z. Functional partially linear quantile regression model. Metrika 77, 317–332 (2014). https://doi.org/10.1007/s00184-013-0439-7