
Functional data analysis of generalized regression quantiles

Statistics and Computing

Abstract

Generalized regression quantiles, including the conditional quantiles and expectiles as special cases, are useful alternatives to the conditional means for characterizing a conditional distribution, especially when the interest lies in the tails. We develop a functional data analysis approach to jointly estimate a family of generalized regression quantiles. Our approach assumes that the generalized regression quantiles share some common features that can be summarized by a small number of principal component functions. The principal component functions are modeled as splines and are estimated by minimizing a penalized asymmetric loss measure. An iterative least asymmetrically weighted squares algorithm is developed for computation. While separate estimation of individual generalized regression quantiles usually suffers from large variability due to lack of sufficient data, by borrowing strength across data sets, our joint estimation approach significantly improves the estimation efficiency, which is demonstrated in a simulation study. The proposed method is applied to data from 159 weather stations in China to obtain the generalized quantile curves of the volatility of the temperature at these stations.
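To fix ideas, the asymmetric weighting behind least asymmetrically weighted squares can be sketched in a few lines: for level \(\tau\), observations above the current fit receive weight \(\tau\) and those below receive \(1-\tau\), and iterating the weighted fit yields the \(\tau\)-expectile. The following Python sketch (our illustration, not the authors' code; the function name is hypothetical) computes a scalar expectile this way.

```python
import numpy as np

def expectile(y, tau, tol=1e-8, max_iter=100):
    """Scalar tau-expectile via iteratively reweighted least squares.

    The tau-expectile minimizes sum_i w_i (y_i - m)^2 with asymmetric
    weights w_i = tau when y_i > m and w_i = 1 - tau otherwise.
    """
    m = np.mean(y)  # symmetric starting value
    for _ in range(max_iter):
        w = np.where(y > m, tau, 1.0 - tau)   # asymmetric weights
        m_new = np.sum(w * y) / np.sum(w)     # weighted least squares fit
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m
```

With \(\tau = 0.5\) the weights are symmetric and the expectile reduces to the ordinary mean; pushing \(\tau\) toward 0 or 1 moves the fit into the corresponding tail.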

Figs. 1–4




Author information


Corresponding author

Correspondence to Jianhua Z. Huang.

Additional information

Guo and Zhou made equal contributions to the paper. Zhou’s work was partially supported by NSF grant DMS-0907170. Huang’s work was partially supported by NSF grants DMS-0907170, DMS-1007618, DMS-1208952, and Award No. KUS-C1-016-04, made by King Abdullah University of Science and Technology (KAUST). Guo and Härdle were supported by CRC 649 “Economic Risk”, Deutsche Forschungsgemeinschaft.

Appendix


A.1 The complete PLAWS algorithm

We give the complete algorithm here. The parameters that appear on the right-hand side of the equations are all fixed at their values from the previous iteration.

  1.

    Initialize the algorithm using the procedure described in Sect. A.2.

  2.

    Update \(\widehat{\boldsymbol{\theta}}_{\mu}\) using

    $$ \begin{aligned}[c] \widehat{\boldsymbol{\theta}}_{\mu} &= \Biggl\{ \sum _{i=1}^{N}\mathbf {B}_{i}^{\top}\widehat{ \mathbf {W}}_{i}\mathbf {B}_{i}+\lambda_{\mu} {\boldsymbol {\varOmega }}\Biggr\} ^{-1} \\ &\quad {}\times\Biggl\{ \sum_{i=1}^{N} \mathbf {B}_{i}^{\top}\widehat{\mathbf {W}}_{i}(\mathbf {Y}_{i}- \mathbf {B}_{i}\widehat{{\boldsymbol {\varTheta }}}_{\psi}\widehat{{\boldsymbol {\alpha }}}_{i}) \Biggr\} . \end{aligned} $$
  3.

    For l=1,…,K, update the l-th column of \(\widehat{{\boldsymbol {\varTheta }}}_{\psi}\) using

    $$ \begin{aligned}[c] \widehat{\boldsymbol{\theta}}_{\psi,l} &= \Biggl\{ \sum _{i=1}^{N} \widehat{{\boldsymbol {\alpha }}}_{il}^{2} \mathbf {B}_{i}^{\top}\widehat{\mathbf {W}}_{i}\mathbf {B}_{i}+ \lambda_{\psi} {\boldsymbol {\varOmega }}\Biggr\} ^{-1} \\ &\quad {}\times\Biggl\{ \sum _{i=1}^{N} \widehat{\alpha}_{il} \mathbf {B}_{i}^{\top}\widehat{\mathbf {W}}_{i}(\mathbf {Y}_{i}- \mathbf {B}_{i}\widehat{\boldsymbol {\theta}}_{\mu}-\mathbf {B}_{i} {\mathbf {q}}_{il}) \Biggr\} , \end{aligned} $$

    where \(\widehat{\boldsymbol{\theta}}_{\psi,k}\) is the k-th column of \(\widehat{{\boldsymbol {\varTheta }}}_{\psi}\), and

    $$\begin{aligned} {\mathbf {q}}_{il} =\sum_{k \neq l} \widehat{\boldsymbol{ \theta}}_{\psi ,k}\widehat{{\boldsymbol {\alpha }}}_{ik},\quad i=1, \ldots, N. \end{aligned}$$
  4.

    Use the QR decomposition to orthonormalize the columns of \(\widehat{{\boldsymbol {\varTheta }}}_{\psi}\).

  5.

    Update \((\widehat{{\boldsymbol {\alpha }}}_{1}, \dots, \widehat{{\boldsymbol {\alpha }}}_{N})\) using

    $$\widehat{{\boldsymbol {\alpha }}}_{i} = \bigl(\widehat{{\boldsymbol {\varTheta }}}_{\psi}^{\top} \mathbf {B}_{i}^{\top} \widehat{\mathbf {W}}_{i}\mathbf {B}_{i}\widehat{{\boldsymbol {\varTheta }}}_{\psi} \bigr)^{-1} \bigl\{ \widehat{{\boldsymbol {\varTheta }}}_{\psi}^{\top} \mathbf {B}_{i}^{\top} \widehat{\mathbf {W}}_{i}(\mathbf {Y}_{i}-\mathbf {B}_{i}\widehat{\boldsymbol{\theta}}_{\mu}) \bigr\} , $$

    and then center \(\widehat{{\boldsymbol {\alpha }}}_{i}\) such that \(\sum_{i=1}^{N} \widehat{{\boldsymbol {\alpha }}}_{i}=0\).

  6.

    Update the weights, defined in (21) for expectiles and (22) for quantiles.

  7.

    Iterate Steps 2–6 until convergence is reached.
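Assuming a common observation grid, so that one basis matrix B serves all curves, a single sweep of Steps 2–6 can be sketched as follows. This is a simplified illustration with expectile weights and our own variable names, not the authors' implementation; in particular, the weights are refreshed once at the top of each sweep.

```python
import numpy as np

def plaws_sweep(Y, B, Omega, theta_mu, Theta_psi, alpha, tau,
                lam_mu=1.0, lam_psi=1.0):
    # One pass of Steps 2-6, simplified to a common time grid: B (T, p)
    # is shared by all N curves, while the weights W_i stay curve-specific.
    N, T = Y.shape
    p, K = Theta_psi.shape

    # Asymmetric expectile weights: tau above the current fit, 1 - tau below.
    fits = (theta_mu + alpha @ Theta_psi.T) @ B.T        # (N, T)
    W = np.where(Y > fits, tau, 1.0 - tau)               # (N, T)

    # Step 2: update the mean coefficients theta_mu.
    A = sum((B * W[i][:, None]).T @ B for i in range(N)) + lam_mu * Omega
    b = sum((B * W[i][:, None]).T @ (Y[i] - B @ Theta_psi @ alpha[i])
            for i in range(N))
    theta_mu = np.linalg.solve(A, b)

    # Step 3: update each principal component column in turn.
    for l in range(K):
        # q[i] = sum_{k != l} theta_psi_k * alpha_ik  (a p-vector per curve)
        q = alpha @ Theta_psi.T - np.outer(alpha[:, l], Theta_psi[:, l])
        A = sum(alpha[i, l] ** 2 * (B * W[i][:, None]).T @ B
                for i in range(N)) + lam_psi * Omega
        b = sum(alpha[i, l] * (B * W[i][:, None]).T
                @ (Y[i] - B @ theta_mu - B @ q[i]) for i in range(N))
        Theta_psi[:, l] = np.linalg.solve(A, b)

    # Step 4: orthonormalize the columns via the QR decomposition.
    Theta_psi, _ = np.linalg.qr(Theta_psi)

    # Step 5: update the scores alpha_i, then center them.
    BT = B @ Theta_psi
    for i in range(N):
        A = (BT * W[i][:, None]).T @ BT
        b = (BT * W[i][:, None]).T @ (Y[i] - B @ theta_mu)
        alpha[i] = np.linalg.solve(A, b)
    alpha -= alpha.mean(axis=0)

    return theta_mu, Theta_psi, alpha
```

Step 7 then amounts to calling this sweep repeatedly until the parameter changes fall below a tolerance.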

A.2 Initial values for the PLAWS algorithm

The following procedure is used for initialization of the PLAWS algorithm:

  1.

    Estimate N expectile/quantile curves \(\widehat{l}_{i}(t)\) separately by applying the single curve estimation algorithm described in Sect. 2.

  2.

    Set \(\widehat{\mathbf {l}}_{i} = \{\widehat{l}_{i}(t_{i1}), \dots, \widehat{l}_{i}(t_{iT_{i}})\}^{\top}\). Run the linear regression

    $$ \widehat{\mathbf {l}}_{i}= \mathbf {B}_{i}\boldsymbol{ \theta}_{\mu}+\boldsymbol {\varepsilon }_{i}, \quad i=1, \dots, N, $$
    (28)

    to get the initial value of \(\widehat{\boldsymbol{\theta}}_{\mu}\) as follows

    $$\widehat{\boldsymbol{\theta}}_{\mu}^{(0)}= \Biggl(\sum _{i=1}^{N}\mathbf {B}_{i}^{\top} \mathbf {B}_{i} \Biggr)^{-1} \Biggl(\sum_{i=1}^{N} \mathbf {B}_{i}^{\top}\widehat{\mathbf {l}}_{i} \Biggr). $$

    Add a small ridge penalty if needed to overcome the singularity problem.

  3.

    Calculate the residuals of the regression (28), denoted as \(\widetilde{\mathbf {l}}_{i}= \widehat{\mathbf {l}}_{i}-\mathbf {B}_{i}\widehat {\boldsymbol{\theta}}_{\mu}^{(0)}\). For each i, run the following linear regression

    $$\begin{aligned} \widetilde{\mathbf {l}}_{i}= \mathbf {B}_{i}\boldsymbol {\varGamma }_{i}+ \boldsymbol {\varepsilon }_{i}. \end{aligned}$$

    The solution, denoted as \(\widehat{\boldsymbol {\varGamma }}_{i}^{(0)}\), is used in later steps to find initial values of \({\boldsymbol {\varTheta }}_{\psi}\) and \({\boldsymbol {\alpha }}_{i}\). Set \(\widehat{\boldsymbol {\varGamma }}^{(0)}=(\widehat{\boldsymbol {\varGamma }}_{1}^{(0)}, \ldots, \widehat{\boldsymbol {\varGamma }}_{N}^{(0)})\).

  4.

    Calculate the singular value decomposition of \(\widehat{\boldsymbol {\varGamma }}^{(0)\top}\):

    $$\begin{aligned} \widehat{\boldsymbol {\varGamma }}^{(0)\top}= \mathbf {U}\mathbf {D}\mathbf {V}^{\top}. \end{aligned}$$

    The initial value of \({\boldsymbol {\varTheta }}_{\psi}\) is chosen as \(\widehat{{\boldsymbol {\varTheta }}}_{\psi}^{(0)} = \mathbf {V}_{K}\mathbf {D}_{K}\), where \(\mathbf {V}_{K}\) consists of the first K columns of \(\mathbf {V}\) and \(\mathbf {D}_{K}\) is the corresponding K×K block of \(\mathbf {D}\).

  5.

    Run the following regression

    $$ \widehat{\boldsymbol {\varGamma }}_{i}^{(0)}= \widehat{{\boldsymbol {\varTheta }}}_{\psi} \widehat {{\boldsymbol {\alpha }}}_{i} + \boldsymbol {\varepsilon }_{i} $$
    (29)

    to get the initial values of \(\widehat{{\boldsymbol {\alpha }}}_{i}\); use a ridge penalty if the regression is singular. Center the \(\widehat{{\boldsymbol {\alpha }}}_{i}\)’s such that \(\sum_{i=1}^{N} \widehat{{\boldsymbol {\alpha }}}_{i} = 0\).
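Under the same common-grid simplification used above, the initialization (Steps 2–5) can be sketched as follows. This is our own illustration, not the authors' code: it pools the separately estimated curves for the mean, fits per-curve residual coefficients, and extracts the leading K components by singular value decomposition.

```python
import numpy as np

def plaws_init(L_hat, B, K):
    # Initial values for PLAWS on a common grid.
    # L_hat: (N, T) separately estimated expectile/quantile curves,
    # B: (T, p) spline basis matrix, K: number of principal components.
    N, T = L_hat.shape
    p = B.shape[1]

    # Step 2: pooled least squares for the mean coefficients theta_mu^(0).
    theta_mu0 = np.linalg.solve(N * (B.T @ B), B.T @ L_hat.sum(axis=0))

    # Step 3: per-curve regression of the residual curves on the basis.
    R = L_hat - B @ theta_mu0                              # (N, T) residuals
    Gamma0 = np.linalg.lstsq(B, R.T, rcond=None)[0]        # (p, N)

    # Step 4: SVD of Gamma^(0)T; keep the leading K right singular vectors.
    U, d, Vt = np.linalg.svd(Gamma0.T, full_matrices=False)
    Theta_psi0 = Vt[:K].T * d[:K]                          # V_K D_K, (p, K)

    # Step 5: regress each Gamma_i^(0) on Theta_psi^(0) for the scores,
    # then center the scores across curves.
    alpha0 = np.linalg.lstsq(Theta_psi0, Gamma0, rcond=None)[0].T  # (N, K)
    alpha0 -= alpha0.mean(axis=0)
    return theta_mu0, Theta_psi0, alpha0
```

As in the text, a small ridge term can be added to either regression if the corresponding normal equations are singular.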


Cite this article

Guo, M., Zhou, L., Huang, J.Z. et al. Functional data analysis of generalized regression quantiles. Stat Comput 25, 189–202 (2015). https://doi.org/10.1007/s11222-013-9425-1

