
Functional partially linear quantile regression model

Published in Metrika.

Abstract

This paper considers estimation of a functional partially linear quantile regression model whose parameters include an infinite-dimensional slope function as well as finite-dimensional slope parameters. We show asymptotic normality of the estimator of the finite-dimensional parameter and derive the rate of convergence of the estimator of the infinite-dimensional slope function. In addition, we derive the rate of the mean squared prediction error for the proposed estimator. A simulation study is provided to illustrate the numerical performance of the resulting estimators.
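The estimator is defined through the quantile check loss \(\rho _{\tau }(u)=u(\tau -I\{u<0\})\) of Koenker and Bassett (1978). As a minimal illustrative sketch (not the authors' code), minimizing the check loss over a constant recovers the empirical \(\tau \)-quantile, which is the scalar analogue of the minimization used in the paper:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile-regression check function rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(0)
y = rng.normal(size=500)
tau = 0.25

# Minimizing sum_i rho_tau(y_i - c) over constants c yields an empirical
# tau-quantile of the sample; brute-force search over the sample points.
losses = [check_loss(y - c, tau).sum() for c in y]
c_hat = y[int(np.argmin(losses))]

# c_hat should be close to the usual empirical tau-quantile
print(c_hat, np.quantile(y, tau))
```

In the functional model the same loss is minimized jointly over the scalar slopes and the truncated basis coefficients of the slope function.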

[Figs. 1–4: not reproduced here]


References

  • Aneiros-Pérez G, Vieu P (2006) Semi-functional partial linear regression. Stat Prob Lett 76:1102–1110

  • Cardot H, Crambes C, Sarda P (2005) Quantile regression when the covariates are functions. J Nonparametr Stat 17:841–856

  • Engle R, Granger C, Rice J, Weiss A (1986) Semiparametric estimates of the relation between weather and electricity sales. J Am Stat Assoc 81:310–320

  • Fan YQ, Li Q (1999) Root-n-consistent estimation of partially linear time series models. J Nonparametr Stat 11:251–269

  • Gao JT (1995) Asymptotic theory for partly linear models. Commun Stat Theory Methods 24:1985–2009

  • Hall P, Horowitz JL (2007) Methodology and convergence rates for functional linear regression. Ann Stat 35:70–91

  • Härdle W, Liang H, Gao JT (2000) Partially linear models. Physica-Verlag, Heidelberg

  • He X, Liang H (2000) Quantile regression estimates for a class of linear and partially linear errors-in-variables models. Statistica Sinica 10:129–140

  • He X, Shi PD (1996) Bivariate tensor-product B-spline in a partly linear model. J Multivar Anal 58:162–181

  • He X, Zhu ZY, Fung WK (2002) Estimation in a semiparametric model for longitudinal data with unspecified dependence structure. Biometrika 89:579–590

  • Koenker R (2005) Quantile regression. Cambridge University Press, Cambridge

  • Koenker R, Bassett G (1978) Regression quantiles. Econometrica 46:33–51

  • Liu Q (2011) Asymptotic normality for the partially linear EV models with longitudinal data. Commun Stat Theory Methods 40:1149–1158

  • Mammen E, van de Geer S (1997) Penalized quasi-likelihood estimation in partial linear models. Ann Stat 25:1014–1035

  • Moyeed RA, Diggle PJ (1994) Rate of convergence in semiparametric modeling of longitudinal data. Aust J Stat 36:75–93

  • Ramsay J, Silverman B (2005) Functional data analysis, 2nd edn. Springer, New York

  • Shi P, Li G (1994) On the rate of convergence of minimum \(L_1\)-norm estimates in a partly linear model. Commun Stat Theory Methods 23:175–196

  • Shin H (2009) Partial functional linear regression. J Stat Plan Inference 139:3405–3418

  • Speckman P (1988) Kernel smoothing in partial linear models. J R Stat Soc Ser B 50:413–436

  • Stone CJ (1985) Additive regression and other nonparametric models. Ann Stat 13:689–705

  • Wang H, Zhu ZY, Zhou JH (2009) Quantile regression in partially linear varying coefficient models. Ann Stat 37:3841–3866

  • Zhang X, Liang H (2011) Focused information criterion and model averaging for generalized additive partial linear models. Ann Stat 39:174–200


Acknowledgments

The authors would like to thank the referees for their helpful comments, which led to an improvement of an earlier version of the manuscript.

Author information


Corresponding author

Correspondence to Jiang Du.

Additional information

Jiang Du’s work is supported by the National Natural Science Foundation of China (Nos. 11271039, 11101015, 11261025), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 20091103120012) and a fund from the government of Beijing (No. 2011D005015000007). Sun’s research was supported by grants from the Foundation of Academic Discipline Program at Central University of Finance and Economics and the National Statistics Research Projects in 2012.

Appendix

Let \({{\varvec{Z}}}=({{\varvec{Z}}}_1^T,\ldots , {{\varvec{Z}}}^T_n)^T\) and \({{\varvec{U}}}=({{\varvec{U}}}_1,\ldots , {{\varvec{U}}}_n)^T\) be the \(n\times p\) and \(n\times m\) design matrices for the parametric (constant slope) component and the functional slope component, respectively. Also let \({{\varvec{P}}}= {{\varvec{U}}}({{\varvec{U}}}^T{{\varvec{U}}})^{-1}{{\varvec{U}}}^T\), \({{\varvec{Z}}}^*=({{\varvec{I}}}-{{\varvec{P}}}){{\varvec{Z}}}=({{\varvec{Z}}}^*_1,\ldots ,{{\varvec{Z}}}^*_n)^T\) and \({{\varvec{S}}}_n={{{\varvec{Z}}}^*}^T{{\varvec{Z}}}^*\).
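These matrix quantities can be formed directly with basic linear algebra. A minimal numpy sketch with hypothetical dimensions (\(n=200\), \(p=2\), \(m=5\) are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m = 200, 2, 5          # sample size, scalar covariates, truncation level

Z = rng.normal(size=(n, p))  # design matrix of the scalar covariates
U = rng.normal(size=(n, m))  # stand-in for the FPCA score matrix

# Projection onto the column space of U, residual design matrix, and S_n
P = U @ np.linalg.solve(U.T @ U, U.T)
Z_star = (np.eye(n) - P) @ Z
S_n = Z_star.T @ Z_star

# P is a symmetric idempotent (projection) matrix with trace m,
# the facts used repeatedly in the proofs below.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)
assert np.isclose(np.trace(P), m)
```

The assertions spell out why bounds such as \(\text{trace}({{\varvec{P}}})=O(m)\) are available in Lemma 1.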

Lemma 1

Under conditions C1–C5, one has

$$\begin{aligned} {\frac{1}{n}{{\varvec{S}}}_{n}}=\Sigma +o_p(1). \end{aligned}$$

Proof

Let \({\varvec{\eta }}=({\varvec{\eta }}_1,\ldots ,{\varvec{\eta }}_n)^T\) and \({{\varvec{Z}}} = ({{\varvec{Z}}}- {\varvec{\Delta }}) +{\varvec{\Delta }}={\varvec{\eta }}+{\varvec{\Delta }}\), where \({\varvec{\Delta }}=(\langle {{\varvec{g}}}, X_1 \rangle ,\ldots ,\langle {{\varvec{g}}}, X_n \rangle )^T\) and \({{\varvec{g}}}=(g_1,\ldots ,g_p)^T\) is defined in Condition C5. We can write

$$\begin{aligned} {{\varvec{S}}}_{n}&= ( ({{\varvec{I}}}-{{\varvec{P}}}){{\varvec{Z}}})^{T}(({{\varvec{I}}}-{{\varvec{P}}}){{\varvec{Z}}}) \\&= ({\varvec{\eta }}+{\varvec{\Delta }})^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}})({\varvec{\eta }}+{\varvec{\Delta }})\\&= {\varvec{\eta }}^{T}{\varvec{\eta }}+{\varvec{\Delta }}^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }}+{\varvec{\Delta }}^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\eta }}\\&\quad +{\varvec{\eta }}^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}}) {\varvec{\Delta }}-{\varvec{\eta }}^{T}{{\varvec{P}}}^{T}{{\varvec{P}}}{\varvec{\eta }}\\&= {\varvec{\eta }}^{T}{\varvec{\eta }}+I_{n1}+I_{n2}+I_{n3}+I_{n4}, \end{aligned}$$

where

$$\begin{aligned}&I_{n1}={\varvec{\Delta }}^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }},\\&I_{n2}={\varvec{\eta }}^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }},\\&I_{n3}={\varvec{\Delta }}^{T}({{\varvec{I}}}-{{\varvec{P}}})^{T}({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\eta }},\\&I_{n4}=-{\varvec{\eta }}^{T}{{\varvec{P}}}^{T}{{\varvec{P}}}{\varvec{\eta }}. \end{aligned}$$

Invoking the strong law of large numbers, one has

$$\begin{aligned} \frac{1}{n}{\varvec{\eta }}^{T}{\varvec{\eta }}\rightarrow \Sigma , \quad a.s. \end{aligned}$$
(7)

Invoking (7), to prove Lemma 1, it is enough to show that

$$\begin{aligned} \Vert I_{nl}\Vert =o_{p}(n),\quad l=1,2,3,4. \end{aligned}$$

By Condition C2, the fact that \(\Vert \phi _{j}-\hat{\phi }_{j}\Vert ^2=O_p(n^{-1}j^2)\) and the Karhunen–Loève representation, one has

$$\begin{aligned} \Vert {\varvec{\Delta }}-{{\varvec{U}}}{{\varvec{M}}}\Vert ^{2}&= \sum _{i=1}^{n}\sum _{l=1}^{p}\left\| \langle x_{i},g_{l}\rangle -\sum _{j=1}^{m}\langle x_{i},\hat{\phi }_{j}\rangle \lambda _{lj}^{\prime }\right\| ^{2} \\&= \sum _{i=1}^{n}\sum _{l=1}^{p}\left\| \sum _{j=1}^{\infty }\phi _{j}\lambda _{lj}-\sum _{j=1}^{m}\hat{\phi }_{j}\lambda _{lj}^{\prime }\right\| ^{2}\\&= \sum _{i=1}^{n}\sum _{l=1}^{p}\left\| \sum _{j=m+1}^{\infty }\phi _{j}\lambda _{lj}+\sum _{j=1}^{m}\left( \phi _{j}\lambda _{lj}-\hat{\phi }_{j}\lambda _{lj}^{\prime }\right) \right\| ^{2}\\&\le 2\sum _{i=1}^{n}\sum _{l=1}^{p}\sum _{j=m+1}^{\infty }\left\| \phi _{j}\lambda _{lj}\right\| ^{2}+2\sum _{i=1}^{n}\sum _{l=1}^{p}\left\| \sum _{j=1}^{m}\left( \phi _{j}\lambda _{lj}-\hat{\phi }_{j}\lambda _{lj}^{\prime }\right) \right\| ^{2}\\&\le 2\sum _{i=1}^{n}\sum _{l=1}^{p}\sum _{j=m+1}^{\infty }j^{-2b}+2\sum _{i=1}^{n}\sum _{l=1}^{p}\sum _{j=1}^{m}\left\| \phi _{j}(\lambda _{lj}-\lambda _{lj}^{\prime })+(\phi _{j}-\hat{\phi }_{j})\lambda _{lj}^{\prime }\right\| ^2\\&= O_{p}\left( n^{-\frac{2b-1}{a+2b}}\right) n+O_{p}\left( n\sum _{j=1}^{m} \Vert \phi _{j}-\hat{\phi }_{j}\Vert ^{2}\right) \\&= O_{p}\left( n^{-\frac{2b-1}{a+2b}}\right) n+O_{p}\left( n\sum _{j=1}^{m}n^{-1}j^2\right) \\&= O_{p}\left( n\cdot n^{-\frac{2b-1}{a+2b}}\right) +O_{p}\left( \frac{m(m+1)(2m+1)}{6}\right) \\&= O_{p}\left( n^{\frac{a+1}{a+2b}}\right) +O_{p}(m^{3}), \end{aligned}$$

where \(\lambda _{lj}^{\prime }=\langle \hat{\phi }_{j},g_{l}\rangle \) and \(\lambda _{lj}=\langle \phi _{j},g_{l}\rangle \), \(l=1,\ldots ,p\).

By Condition C3, there exists a matrix \({{\varvec{M}}}\) such that \(\Vert {\varvec{\Delta }}-{{\varvec{U}}}{{\varvec{M}}}\Vert ^{2}=O_p\left( n^{a_1/(a+2b)}\right) \), where \(a_1=\max (3, a+1)\). In addition, as \({{\varvec{P}}}\) is a projection matrix, we have

$$\begin{aligned} \Vert ({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }}\Vert ^{2}&= \Vert {\varvec{\Delta }} -{{\varvec{U}}}{{\varvec{M}}} \Vert ^{2}+\Vert {{\varvec{U}}}{{\varvec{M}}} - {{\varvec{P}}} {\varvec{\Delta }}\Vert ^{2}\nonumber \\&\le 2\Vert {\varvec{\Delta }} -{{\varvec{U}}}{{\varvec{M}}} \Vert ^{2}=O_p\left( n^{a_1/(a+2b)}\right) . \end{aligned}$$
(8)

By Condition C4 and the strong law of large numbers, \(\frac{1}{n}{\varvec{\eta }}^T{\varvec{\eta }}\) converges almost surely to \(\Sigma \). For \(k\ne l\), one has

$$\begin{aligned} E\left\{ ({\varvec{\eta }}^T{{\varvec{P}}}{\varvec{\eta }})_{kl}\right\} ^2&= (\Sigma _{kl})^2(\text{ trace }({{\varvec{P}}}))^2+\left\{ \Sigma _{kk}\Sigma _{ll}+(\Sigma _{kl})^2\right\} \text{ trace }({{\varvec{P}}}{{\varvec{P}}}^{T})\nonumber \\&\quad +\left\{ E[{\varvec{\eta }}_{1k}^2{\varvec{\eta }}_{1l}^2]-2(\Sigma _{kl})^2-\Sigma _{kk} \Sigma _{ll}\right\} \sum _{s}{{\varvec{P}}}_{ss}. \end{aligned}$$
(9)

In addition, since \({{\varvec{P}}}\) is a projection matrix of rank at most \(m\), this expression is \(O(m)\). As \({{\varvec{P}}}\) is positive semidefinite, when \(k=l\),

$$\begin{aligned} E\left\{ ({\varvec{\eta }}^T{{\varvec{P}}}{\varvec{\eta }})_{kk}\right\} =\Sigma _{kk}\text{ trace }({{\varvec{P}}})=O(m). \end{aligned}$$
(10)

Invoking (9) and (10), we have

$$\begin{aligned} \Vert {{\varvec{P}}}{\varvec{\eta }}\Vert ^{2}=O_{p}(m). \end{aligned}$$
(11)

Similarly, we have

$$\begin{aligned} \Vert ({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\eta }}\Vert ^{2}=O_{p}(m). \end{aligned}$$
(12)

Invoking (11) and (12) and Condition C2, we have

$$\begin{aligned} \Vert I_{n1}\Vert \le \Vert ({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }}\Vert ^{2}&= O_p\left( n^{a_1/(a+2b)}\right) =o_{p}(n),\\ \Vert I_{n2}\Vert ^{2} = \Vert I_{n3}\Vert ^{2}&= \Vert {\varvec{\eta }}^{T}\left( {{\varvec{I}}}-{{\varvec{P}}}\right) ^{T}\left( {{\varvec{I}}}-{{\varvec{P}}}\right) {\varvec{\Delta }}\Vert ^{2}\\&\le \Vert ({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\eta }}\Vert ^{2}\Vert ({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }}\Vert ^{2}\\&= O_{p}(m)O_{p}\left( n^{\frac{a_1}{a+2b}}\right) \\&= o_{p}(n^{2}), \end{aligned}$$

so that \(\Vert I_{n2}\Vert =\Vert I_{n3}\Vert =o_{p}(n)\),

and

$$\begin{aligned} \Vert I_{n4}\Vert = \Vert {\varvec{\eta }}^{T}{{\varvec{P}}}^{T}{{\varvec{P}}}{\varvec{\eta }}\Vert \le \Vert {{\varvec{P}}}{\varvec{\eta }}\Vert ^{2} =O_{p}(m)=o_{p}(n). \end{aligned}$$

The proof is hence complete. \(\square \)

Lemma 2

Under conditions C1–C5, we have

$$\begin{aligned} {{\varvec{S}}}_{n}^{-\frac{1}{2}}{{\varvec{Z}}}^{*T}{ {\varvec{\psi }}({\varvec{\varepsilon }})}\rightarrow N(0,\tau (1-\tau )I_{p}), \end{aligned}$$

where \({\varvec{\psi }}({\varvec{\varepsilon }})=(\psi (\varepsilon _{1}),\ldots ,\psi (\varepsilon _{n}))^{T}.\)

Proof

Invoking \({{\varvec{Z}}}={\varvec{\eta }}+{\varvec{\Delta }},\) we have

$$\begin{aligned} {{\varvec{S}}}_{n}^{-\frac{1}{2}}{{\varvec{Z}}}^{*T}{\varvec{\psi }}({\varvec{\varepsilon }})&= {{\varvec{S}}}_{n}^{-\frac{1}{2}}(({{\varvec{I}}}-{{\varvec{P}}}){{\varvec{Z}}})^{T}{\varvec{\psi }}({\varvec{\varepsilon }})\\&= {{\varvec{S}}}_{n}^{-\frac{1}{2}}(({{\varvec{I}}}-{{\varvec{P}}})({\varvec{\eta }}+{\varvec{\Delta }}))^{T}{\varvec{\psi }}({\varvec{\varepsilon }})\\&= {{\varvec{S}}}_{n}^{-\frac{1}{2}}{\varvec{\eta }}^{T}{\varvec{\psi }}({\varvec{\varepsilon }})-{{\varvec{S}}}_{n}^{-\frac{1}{2}}({{\varvec{P}}}{\varvec{\eta }})^{T}{\varvec{\psi }}({\varvec{\varepsilon }})+{{\varvec{S}}}_{n}^{-\frac{1}{2}}(({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }})^{T}{\varvec{\psi }}({\varvec{\varepsilon }}). \end{aligned}$$

By Lemma 1, (8) and (11), we have

$$\begin{aligned} \Vert {{\varvec{S}}}_{n}^{-\frac{1}{2}}({{\varvec{P}}}{\varvec{\eta }})^{T}\Vert ^2=o_{p}(1)\quad \text{ and } \quad \Vert {{\varvec{S}}}_{n}^{-\frac{1}{2}}(({{\varvec{I}}}-{{\varvec{P}}}){\varvec{\Delta }})^{T}{\varvec{\psi }}({\varvec{\varepsilon }})\Vert ^2=o_{p}(1). \end{aligned}$$

Thus,

$$\begin{aligned} {{\varvec{S}}}_{n}^{-\frac{1}{2}}{{\varvec{Z}}}^{*T}{\varvec{\psi }}({\varvec{\varepsilon }})={{\varvec{S}}}_{n}^{-\frac{1}{2}}{\varvec{\eta }}^{T}{\varvec{\psi }}({\varvec{\varepsilon }})+o_{p}(1). \end{aligned}$$

By Condition C5 and the central limit theorem, one has

$$\begin{aligned} {{\varvec{S}}}_{n}^{-\frac{1}{2}}{{\varvec{Z}}}^{*T}{\varvec{\psi }}({\varvec{\varepsilon }})\rightarrow N\left( 0,\tau (1-\tau )I_{p}\right) \!. \end{aligned}$$

\(\square \)

Proof of Theorem 1

Let

$$\begin{aligned} {\varvec{\xi }}\left( \begin{array}{c} {\varvec{\theta }} \\ {\varvec{\gamma }} \end{array} \right) =\left( \begin{array}{c} {\varvec{\xi }}_{1}\\ {\varvec{\xi }}_{2} \end{array} \right) =\left( \begin{array}{c} f(0){{\varvec{S}}}_{n}^{\frac{1}{2}}({\varvec{\theta }}-{\varvec{\theta }}_{0}) \\ {{\varvec{H}}}_{m}^{\frac{1}{2}}({\varvec{\gamma }}-{\varvec{\gamma }}_{m})+{{\varvec{H}}}_{m}^{-\frac{1}{2}}{{\varvec{U}}}^{T}{{\varvec{Z}}}({\varvec{\theta }}-{\varvec{\theta }}_{0}) \end{array} \right) , \end{aligned}$$

where \({{\varvec{H}}}_{m}=m{{\varvec{U}}}^{T}{{\varvec{U}}}\). Let \(\hat{{\varvec{\xi }}}={\varvec{\xi }}(\hat{{\varvec{\theta }}},\hat{{\varvec{\gamma }}})=(\hat{{\varvec{\xi }}}_{1}^{T},\hat{{\varvec{\xi }}}_{2}^{T})^{T}.\) Now, we show that \(\Vert \hat{{\varvec{\xi }}}\Vert =O_{p}(\delta _{n}).\) To do so, let \(\tilde{{{\varvec{Z}}}}_{i}=\frac{1}{f(0)}{{\varvec{S}}}_{n}^{-\frac{1}{2}}{{\varvec{Z}}}_{i},\) \(\tilde{{{\varvec{U}}}}_{i}={{\varvec{H}}}_{m}^{-\frac{1}{2}}{{\varvec{U}}}_{i}\) and \(R_{i}=\sum _{j=1}^{m}\langle x_{i},\hat{\phi }_{j}\rangle \gamma _{j0}-\int \nolimits _{0}^{1}\beta _{0}(t)x_{i}(t)dt\).
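The scores \(\langle x_{i},\hat{\phi }_{j}\rangle \) and the estimated eigenfunctions \(\hat{\phi }_{j}\) come from a functional principal component analysis of the observed curves. A sketch of this step on a discretized grid (the sine basis, grid size, and truncation level are illustrative assumptions, not the paper's simulation design):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 150, 101                      # number of curves, grid points on [0, 1]
t = np.linspace(0.0, 1.0, T)
dt = t[1] - t[0]

# Simulate X_i(t) = sum_j xi_ij * sqrt(2) sin(j pi t) with decaying scores
J = 10
basis = np.sqrt(2) * np.sin(np.pi * np.outer(np.arange(1, J + 1), t))  # (J, T)
scores = rng.normal(size=(n, J)) / np.arange(1, J + 1)
X = scores @ basis                   # (n, T) discretized curves

# Empirical covariance on the grid and its leading eigenfunctions (FPCA)
Xc = X - X.mean(axis=0)
K = (Xc.T @ Xc) / n                  # (T, T) covariance matrix on the grid
eigval, eigvec = np.linalg.eigh(K * dt)
order = np.argsort(eigval)[::-1]     # eigh returns ascending order
m = 4
phi_hat = eigvec[:, order[:m]].T / np.sqrt(dt)   # rows orthonormal in L2([0,1])

# Score matrix U with entries <X_i, phi_hat_j>, Riemann-sum approximation
U = Xc @ phi_hat.T * dt              # (n, m)
```

The rows of `phi_hat` are orthonormal in \(L^2([0,1])\) up to discretization error, which is the property used for \(K_{n1}\) below.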

Noting that \(\Vert \phi _{j}-\hat{\phi }_{j}\Vert ^2=O_p(n^{-1}j^2)\), one has

$$\begin{aligned} \Vert R_{i}\Vert ^{2}&= \left\| \sum _{j=1}^{m}\langle x_{i},\hat{\phi }_{j}\rangle \gamma _{j0}-\sum _{j=1}^{\infty }\langle x_{i},\phi _{j}\rangle \gamma _{j0}\right\| ^{2}\\&\le \left\| \sum _{j=1}^{m}\langle x_{i},\hat{\phi }_{j}\rangle \gamma _{j0}-\sum _{j=1}^{m}\langle x_{i},\phi _{j}\rangle \gamma _{j0}\right\| ^{2}+\left\| \sum _{j=m+1}^{\infty }\langle x_{i},\phi _{j}\rangle \gamma _{j0}\right\| ^{2}\\&\le \sum _{j=1}^{m}\left\| \hat{\phi }_{j}-\phi _{j}\right\| ^{2}|\gamma _{j0}|^{2}+\sum _{j=m+1}^{\infty }|\gamma _{j0}|^{2}\\&= \sum _{j=1}^{m}O_{p}\left( n^{-1} j^{2-2b}\right) +\sum _{j=m+1}^{\infty }j^{-2b}\\&= O_{p}\left( n^{-\frac{2b-1}{a+2b}}\right) +O\left( n^{-\frac{2b-1}{a+2b}}\right) \\&= O_{p}\left( n^{-\frac{2b-1}{a+2b}}\right) \!. \end{aligned}$$

Thus, one has

$$\begin{aligned} \sum _{i=1}^{n}\rho _{\tau }(Y_{i}-{{\varvec{Z}}}_{i}^{T}{\varvec{\theta }}-{{\varvec{U}}}_{i}^{T}\gamma )=\sum _{i=1}^{n}\rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}{\varvec{\xi }}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}{\varvec{\xi }}_{2}-R_{i}\right) \!, \end{aligned}$$

which is minimized at \(\hat{{\varvec{\xi }}}\).

By similar arguments to those of Lemma 1 of Cardot et al. (2005), for any \(\kappa >0\) there exists \(L_{\kappa }\) such that

$$\begin{aligned} P\left\{ \inf \limits _{\Vert {\varvec{\xi }}\Vert >L_{\kappa }\delta _{n}}\sum _{i=1}^{n}\rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}{\varvec{\xi }}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}{\varvec{\xi }}_{2}-R_{i}\right) >\sum _{i=1}^{n}\rho _{\tau }(\varepsilon _{i}-R_{i})\right\} >1-\kappa . \end{aligned}$$

On the other hand, we have

$$\begin{aligned} \sum _{i=1}^{n}\rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}\hat{{\varvec{\xi }}}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}\hat{{\varvec{\xi }}}_{2}-R_{i}\right) =\inf \limits _{{\varvec{\xi }}\in R^{m+p}}\sum _{i=1}^{n} \rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}{\varvec{\xi }}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}{\varvec{\xi }}_{2}-R_{i}\right) .\qquad \end{aligned}$$
(13)

Taking \({\varvec{\xi }}=0\) in (13), we have

$$\begin{aligned} \sum _{i=1}^{n}\rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}\hat{{\varvec{\xi }}}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}\hat{{\varvec{\xi }}}_{2}-R_{i}\right) \le \sum _{i=1}^{n}\rho _{\tau }(\varepsilon _{i}-R_{i}). \end{aligned}$$

Combining this with (13), we obtain

$$\begin{aligned}&P\left\{ \inf \limits _{\Vert {\varvec{\xi }}\Vert >L_{\kappa }\delta _{n}}\sum _{i=1}^{n}\rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}{\varvec{\xi }}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}{\varvec{\xi }}_{2}-R_{i}\right) \right. \\&\quad \left. >\sum _{i=1}^{n}\rho _{\tau }\left( \varepsilon _{i} -\tilde{{{\varvec{Z}}}}_{i}^{T}\hat{{\varvec{\xi }}}_{1}-\tilde{{{\varvec{U}}}}_{i}^{T}\hat{{\varvec{\xi }}}_{2}-R_{i}\right) \right\} >1-\kappa . \end{aligned}$$

Thus, \(\Vert \hat{{\varvec{\xi }}}\Vert =O_{p}(\delta _{n}).\) This, together with Lemma 1 and the definition of \(\hat{{\varvec{\xi }}}\), yields

$$\begin{aligned} \Vert \hat{{\varvec{\theta }}}-{\varvec{\theta }}_{0}\Vert =\left\| \frac{1}{f(0)}{{\varvec{S}}}_{n}^{-\frac{1}{2}}\hat{{\varvec{\xi }}}_{1}\right\| =O_{p}\left( n^{-\frac{1}{2}}\Vert \hat{{\varvec{\xi }}}_{1}\Vert \right) =O_{p}(n^{-\frac{1}{2}}\delta _{n}). \end{aligned}$$

Note that

$$\begin{aligned} \Vert \hat{\beta }(t)-\beta _{0}(t)\Vert ^{2}&= \left\| \sum _{j=1}^{m}\hat{\gamma }_{j}\hat{\phi }_{j}-\sum _{j=1}^{\infty }\gamma _{j}\phi _{j}\right\| ^{2}\\&\le 2\left\| \sum _{j=1}^{m}\hat{\gamma }_{j}\hat{\phi }_{j}-\sum _{j=1}^{m}\gamma _{j}\phi _{j}\right\| ^{2} +2\left\| \sum _{j=m+1}^{\infty }\gamma _{j}\phi _{j}\right\| ^{2}\\&\le 4\left\| \sum _{j=1}^{m}(\hat{\gamma }_{j}-\gamma _{j})\hat{\phi }_{j}\right\| ^{2} +4\left\| \sum _{j=1}^{m}\gamma _{j}(\hat{\phi }_{j}-\phi _{j})\right\| ^{2} +2\sum _{j=m+1}^{\infty }\gamma _{j}^{2}\\&= K_{n1}+K_{n2}+K_{n3}. \end{aligned}$$

Now consider \(K_{n1}\). Since the functions \(\{\hat{\phi }_j\}_{j=1}^{m}\) form an orthonormal system in \(L^2([0, 1])\), one has

$$\begin{aligned} K_{n1}&= 4\left\| \sum _{j=1}^{m}(\hat{\gamma }_{j}-\gamma _{j})\hat{\phi }_{j}\right\| ^{2}\\&= 4\sum _{j=1}^{m}(\hat{\gamma }_{j}-\gamma _{j})^{2}\\&\le 4\Vert \hat{{\varvec{\gamma }}} -{{\varvec{\gamma }}_m}\Vert ^{2}. \end{aligned}$$

By Lemma 1 of Stone (1985), it is easy to show that \({{\varvec{H}}}_{m}\) is positive definite for sufficiently large \(n\). Therefore, one has

$$\begin{aligned} \Vert \hat{{\varvec{\gamma }}} -{{\varvec{\gamma }}_m}\Vert ^{2}&\le C(\hat{{\varvec{\gamma }}} -{{\varvec{\gamma }}_m})^T{{\varvec{H}}}_{m}(\hat{{\varvec{\gamma }}} -{{\varvec{\gamma }}_m})\\&\le O_{p}(n^{-1}\Vert \hat{{\varvec{\xi }}}_{2}\Vert ^2)+O_p(\Vert \hat{{\varvec{\theta }}}-{\varvec{\theta }}_0\Vert ) =O_{p}\left( \delta _{n}^{2}\right) \!. \end{aligned}$$

As a result, we have \(K_{n1}=O_{p}\left( \delta _{n}^{2}\right) \). Similarly, for \(K_{n2}\) and \(K_{n3}\),

$$\begin{aligned} K_{n2}&\le m\sum _{j=1}^{m}\Vert \hat{\phi }_{j}-\phi _{j}\Vert ^{2}\gamma _{j}^{2}= O_{p}\left( n^{-1}m\sum _{j=1}^{m}j^{2}\gamma _{j}^{2}\right) \\&= O_{p}\left( n^{-1}m\sum _{j=1}^{m}j^{2-2b}\right) =O_{p}\left( n^{-\frac{a+3b-1}{a+2b}}\right) =o_{p}\left( \delta _{n}^{2}\right) \!, \end{aligned}$$
$$\begin{aligned} K_{n3}=\sum _{j=m+1}^{\infty }\gamma _{j}^{2}\le C\sum _{j=m+1}^{\infty }j^{-2b}=O(n^{-\frac{2b-1}{a+2b}})=O\left( \delta _{n}^{2}\right) \!. \end{aligned}$$

Therefore, one has

$$\begin{aligned} \Vert \hat{\beta }-\beta _{0}\Vert ^{2}=O_{p}\left( \delta _{n}^{2}\right) \!. \end{aligned}$$

Next we show the asymptotic normality of \(\hat{{\varvec{\theta }}}\). Let \({\varvec{\xi }}_{1}^{*}=\frac{1}{f(0)}{{\varvec{S}}}_{n}^{-\frac{1}{2}}\sum _{i=1}^{n}{{\varvec{Z}}}_{i}^{*}\psi _{\tau }(\varepsilon _{i})\). According to Lemmas 1 and 2, \({\varvec{\xi }}_{1}^{*}\) is asymptotically normal with variance–covariance matrix \(\frac{\tau (1-\tau )}{f^{2}(0)}I_{p}\).

On the other hand, similarly to He and Shi (1996), we can prove that \(\Vert {\varvec{\xi }}_{1}^{*}-\hat{{\varvec{\xi }}}_{1}\Vert =o_{p}(1).\) Thus,

$$\begin{aligned} \hat{{\varvec{\xi }}}_{1}={\varvec{\xi }}_{1}^{*}+o_{p}(1)=\frac{1}{f(0)}{{\varvec{S}}}_{n}^{-\frac{1}{2}}\sum _{i=1}^{n}{{\varvec{Z}}}_{i}^{*} \psi _{\tau }(\varepsilon _{i})+o_{p}(1). \end{aligned}$$

Obviously,

$$\begin{aligned} \sqrt{n}(\hat{{\varvec{\theta }}}-{\varvec{\theta }}_{0})\rightarrow N\left( 0,\frac{\tau (1-\tau )}{f^{2}(0)}\Sigma \right) . \end{aligned}$$

This completes the proof of Theorem 1.
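The scale factor \(\tau (1-\tau )/f^{2}(0)\) in the limiting covariance is the same one that governs ordinary sample quantiles. A quick Monte Carlo sanity check of that scalar case (an illustration, not part of the paper's simulation study):

```python
import numpy as np

rng = np.random.default_rng(3)
tau, n, reps = 0.5, 400, 2000

# For i.i.d. N(0,1) data the tau-th sample quantile q_hat satisfies
# sqrt(n)(q_hat - q_tau) -> N(0, tau(1-tau)/f(q_tau)^2).
q_tau = 0.0                                   # true median of N(0, 1)
f_q = 1.0 / np.sqrt(2.0 * np.pi)              # N(0,1) density at the median
target_var = tau * (1.0 - tau) / f_q**2       # = pi/2 for the median

draws = rng.normal(size=(reps, n))
q_hat = np.quantile(draws, tau, axis=1)
mc_var = n * q_hat.var()

# mc_var should be close to the asymptotic value pi/2
print(mc_var, target_var)
```

In the functional model the role of \(f(0)\) is played by the error density at its \(\tau \)-th quantile, and \(\Sigma \) accounts for the residual design after projecting out the functional component.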


Cite this article

Lu, Y., Du, J. & Sun, Z. Functional partially linear quantile regression model. Metrika 77, 317–332 (2014). https://doi.org/10.1007/s00184-013-0439-7

