
Efficient parameter estimation via modified Cholesky decomposition for quantile regression with longitudinal data

  • Original Paper
  • Computational Statistics

Abstract

It is well known that specifying a covariance matrix is difficult in quantile regression with longitudinal data. This paper develops a two-step estimation procedure, based on the modified Cholesky decomposition, to improve estimation efficiency. Specifically, in the first step we obtain initial estimators of the regression coefficients by ignoring the possible correlations between repeated measures, and we then apply the modified Cholesky decomposition to construct the covariance models and obtain an estimator of the within-subject covariance matrix. In the second step, we construct unbiased estimating functions to obtain more efficient estimators of the regression coefficients. The proposed estimating functions, however, are discrete and non-convex, so we utilize the induced smoothing method to obtain fast and accurate estimates of the parameters and of their asymptotic covariance. Under some regularity conditions, we establish the asymptotic normality of the resulting estimators. Simulation studies and an analysis of longitudinal progesterone data show that the proposed approach yields highly efficient estimators.
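To make the two-step procedure concrete, the following is a minimal, self-contained Python sketch under simplifying assumptions; it is not the authors' code. It assumes balanced data, reduces the modified Cholesky basis to a single lag-one autoregressive coefficient, drops the density-weighting matrices \(\varvec{\Lambda }_i\), and replaces the induced-smoothing covariance \(\varvec{\Omega }^{1/2}\) of Brown and Wang (2005) with a scalar bandwidth.

```python
# A minimal sketch of the two-step idea (illustrative assumptions only):
# balanced data, a single lag-1 coefficient in the modified Cholesky
# factor, no density weighting, and a scalar smoothing bandwidth in
# place of the matrix Omega^{1/2} used by induced smoothing.
import numpy as np
from scipy.optimize import fsolve, minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
tau = 0.7
n, m, p = 200, 4, 2                # subjects, repeated measures, covariates

# Simulated longitudinal data with AR(1) within-subject errors.
X = rng.normal(size=(n, m, p))
beta_true = np.array([1.0, -0.5])
rho = 0.6
eps = np.empty((n, m))
eps[:, 0] = rng.normal(size=n)
for j in range(1, m):
    eps[:, j] = rho * eps[:, j - 1] + np.sqrt(1 - rho**2) * rng.normal(size=n)
eps -= norm.ppf(tau)               # the tau-th quantile of eps is now 0
y = X @ beta_true + eps

def psi(r):                        # psi_tau(r) = tau - I(r < 0)
    return tau - (r < 0)

# Step 1: initial estimator ignoring within-subject correlation
# (pooled check-loss minimisation).
check = lambda b: np.sum((y - X @ b) * psi(y - X @ b))
beta_init = minimize(check, np.zeros(p), method='Nelder-Mead').x

# Modified Cholesky step: regress psi-residuals on their predecessors.
s = psi(y - X @ beta_init)
phi = np.sum(s[:, 1:] * s[:, :-1]) / np.sum(s[:, :-1] ** 2)
T = np.eye(m)
for j in range(1, m):
    T[j, j - 1] = -phi             # unit lower-triangular factor
d = np.empty(m)
d[0] = np.var(s[:, 0])
d[1:] = np.var(s[:, 1:] - phi * s[:, :-1], axis=0)   # innovation variances
Tinv = np.linalg.inv(T)
Sigma = Tinv @ np.diag(d) @ Tinv.T                   # working covariance of psi
Sigma_inv = np.linalg.inv(Sigma)

# Step 2: solve the smoothed weighted estimating equations.
h = (n * m) ** -0.5                # crude bandwidth standing in for Omega^{1/2}
def U(b):
    r = y - X @ b
    s_smooth = tau - norm.cdf(-r / h)                # smoothed psi_tau
    return np.einsum('nmp,mk,nk->p', X, Sigma_inv, s_smooth)

beta_hat = fsolve(U, beta_init)
print('initial:', beta_init, 'two-step:', beta_hat)
```

The point of the second step is that the \(\hat{\varvec{\Sigma }}^{-1}\)-weighting borrows strength across the repeated measures, which is where the efficiency gain over the working-independence initial estimator comes from.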


References

  • Brown BM, Wang YG (2005) Standard errors and covariance matrices for smoothed rank estimators. Biometrika 92:149–158

  • Fan J, Yao Q (1998) Efficient estimation of conditional variance functions in stochastic regression. Biometrika 85:645–660

  • Fu L, Wang YG (2012) Quantile regression for longitudinal data with a working correlation model. Comput Stat Data Anal 56:2526–2538

  • Fu L, Wang YG (2016) Efficient parameter estimation via Gaussian copulas for quantile regression with longitudinal data. J Multivar Anal 143:492–502

  • Fu L, Wang YG, Zhu M (2015) A Gaussian pseudolikelihood approach for quantile regression with repeated measurements. Comput Stat Data Anal 84:41–53

  • He X, Fu B, Fung WK (2003) Median regression of longitudinal data. Stat Med 22:3655–3669

  • Jung SH (1996) Quasi-likelihood for median regression models. J Am Stat Assoc 91:251–257

  • Koenker R (2004) Quantile regression for longitudinal data. J Multivar Anal 91:74–89

  • Koenker R (2005) Quantile regression. Number 38 in Econometric Society Monographs. Cambridge University Press, New York

  • Leng C, Zhang W (2014) Smoothing combined estimating equations in quantile regression for longitudinal data. Stat Comput 24:123–136

  • Leng C, Zhang W, Pan J (2010) Semiparametric mean-covariance regression analysis for longitudinal data. J Am Stat Assoc 105:181–193

  • Leung D, Wang YG, Zhu M (2009) Efficient parameter estimation in longitudinal data analysis using a hybrid GEE method. Biostatistics 10:436–445

  • Liang KY, Zeger SL (1986) Longitudinal data analysis using generalized linear models. Biometrika 73:13–22

  • Liang H, Liang H, Wang L (2014) Generalized additive partial linear models for clustered data with diverging number of covariates using GEE. Stat Sin 24:173–196

  • Liu S, Li G (2015) Varying-coefficient mean-covariance regression analysis for longitudinal data. J Stat Plan Inference 160:89–106

  • Liu X, Zhang W (2013) A moving average Cholesky factor model in joint mean-covariance modeling for longitudinal data. Sci China Math 56:2367–2379

  • Lu X, Fan Z (2015) Weighted quantile regression for longitudinal data. Comput Stat 30:569–592

  • Mao J, Zhu Z, Fung WK (2011) Joint estimation of mean-covariance model for longitudinal data with basis function approximations. Comput Stat Data Anal 55:983–992

  • Mu Y, Wei Y (2009) A dynamic quantile regression transformation model for longitudinal data. Stat Sin 19:1137–1153

  • Pourahmadi M (1999) Joint mean-covariance models with applications to longitudinal data: unconstrained parameterisation. Biometrika 86:677–690

  • Qin G, Mao J, Zhu Z (2016) Joint mean-covariance model in generalized partially linear varying coefficient models for longitudinal data. J Stat Comput Simul 86:1166–1182

  • Tang CY, Leng C (2011) Empirical likelihood and quantile regression in longitudinal data analysis. Biometrika 98:1001–1006

  • Tang Y, Wang Y, Li J, Qian W (2015) Improving estimation efficiency in quantile regression with longitudinal data. J Stat Plan Inference 165:38–55

  • Wang L (2011) GEE analysis of clustered binary data with diverging number of covariates. Ann Stat 39:389–417

  • Wang YG, Lin X, Zhu M (2005) Robust estimating functions and bias correction for longitudinal data analysis. Biometrics 61:684–691

  • Xu P, Zhu L (2012) Estimation for a marginal generalized single-index longitudinal model. J Multivar Anal 105:285–299

  • Yao W, Li R (2013) New local estimation procedure for a non-parametric regression function for longitudinal data. J R Stat Soc Ser B 75:123–138

  • Ye H, Pan J (2006) Modelling of covariance structures in generalised estimating equations for longitudinal data. Biometrika 93:927–941

  • Zhang W, Leng C (2012) A moving average Cholesky factor model in covariance modeling for longitudinal data. Biometrika 99:141–150

  • Zhang D, Lin X, Raz J, Sowers M (1998) Semiparametric stochastic mixed models for longitudinal data. J Am Stat Assoc 93:710–719

  • Zhao P, Li G (2013) Modified SEE variable selection for varying coefficient instrumental variable models. Stat Methodol 12:60–70

  • Zheng X, Fung W, Zhu Z (2013) Robust estimation in joint mean-covariance regression model for longitudinal data. Ann Inst Stat Math 65:617–638

  • Zheng X, Fung W, Zhu Z (2014) Variable selection in robust joint mean and covariance model for longitudinal data analysis. Stat Sin 24:515–531

Download references

Acknowledgements

The authors are very grateful to the editor and two anonymous referees for their detailed comments on the earlier version of the manuscript, which led to a much improved paper. This work is supported by the Doctoral Grant of Southwest University (Grant No. SWU116015), the Scientific and Technological Research Program of Chongqing Municipal Education Commission (Grant Nos. KJ1703054, KJ130658, KJ1400521), the Fund of Chongqing Normal University (Grant No. 16XLB019) and the Basic and Frontier Research Program of Chongqing (Grant No. cstc2016jcyjA0510).

Author information


Corresponding author

Correspondence to Chaohui Guo.

Appendix

To establish the asymptotic properties of the proposed estimators, we need the following regularity conditions.

(C1) The distribution function \(F_{ij}(t)=P\left( {{Y_{ij}} - \varvec{X}_{ij}^T{\varvec{\beta } _\tau } \le t\left| {{\varvec{X}_{ij}}} \right. } \right) \) is absolutely continuous, with continuous density \(f_{ij}\left( \cdot \right) \) uniformly bounded away from 0 and \(\infty \) at the point 0, and with first derivative \({{\dot{f}}_{ij}}\left( \cdot \right) \) uniformly bounded, for \(i=1,\ldots ,n\), \(j=1,\ldots ,m_i\).

(C2) The true value \({\varvec{\beta }}_\tau \) is in the interior of a bounded convex region \( {\mathcal {B}}\).

(C3) As \(n\rightarrow \infty \), the number of repeated measurements \( m_i\) remains bounded for each i.

(C4) For any positive definite matrix \({\varvec{W}}_i\), \({n^{ - 1}}\sum \nolimits _{i = 1}^n {\varvec{X}_i^T{\varvec{\Lambda } _i}{{\varvec{W}}_i}{\varvec{\Lambda } _i}{\varvec{X}_i}} \) converges to a positive definite matrix, where \({\varvec{\Lambda } _i}\) is an \(m_i \times m_i\) diagonal matrix with the jth diagonal element \({f_{ij}}\left( 0 \right) \). In addition, \({\sup _i}\left\| {{\varvec{X}_i}} \right\| < \infty \), where \(\left\| \cdot \right\| \) denotes the Euclidean norm.

(C5) Matrix \(\varvec{\Omega }\) is positive definite and \(\varvec{\Omega }=O\left( {{1 / n}} \right) \).

(C6) The matrix \(-\frac{{\partial {{\tilde{\varvec{U}}}_{w\tau } }({\varvec{\beta }}_\tau )}}{{\partial \varvec{\beta }_\tau }}\), the negative derivative of the smoothed estimating function \({{\tilde{\varvec{U}}}_{w\tau } }({\varvec{\beta }}_\tau )\), is positive definite with probability 1.

(C7) For \({{\varvec{ \upsilon } }_{ij}} = {\left( {\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{{ \varepsilon }_{il}}} \right) W_{j,l,1}^{(i)}} ,\ldots ,\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{{ \varepsilon }_{il}}} \right) W_{j,l,s}^{(i)}} } \right) ^T}\), we have

$$\begin{aligned} {\left( {N - n} \right) ^{ - 1}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{\varvec{ \upsilon }_{ij}}\varvec{ \upsilon } _{ij}^T} } \mathop \rightarrow \limits ^p \varvec{\Lambda } > 0, \end{aligned}$$

where \(\mathop \rightarrow \limits ^p \) denotes convergence in probability.
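For orientation, the constructions behind (C5)–(C7) can be written out explicitly. The display below is a reconstruction from the notation of (C7) and the proofs that follow, in the spirit of the modified Cholesky modelling of Pourahmadi (1999) and the induced smoothing of Brown and Wang (2005); it is not a verbatim restatement of the paper's equations:

$$\begin{aligned} {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) = \sum \limits _{l = 1}^{j - 1} {\phi _{jl}^{(i)}{\psi _\tau }\left( {{\varepsilon _{il}}} \right) } + {e_{ij,\tau }},\qquad \phi _{jl}^{(i)} = \sum \limits _{k = 1}^s {W_{j,l,k}^{(i)}{\theta _{\tau ,k}}}, \end{aligned}$$

so that \(\varvec{\upsilon } _{ij}^T{\varvec{\theta }_\tau } = \sum \nolimits _{l = 1}^{j - 1} {\phi _{jl}^{(i)}{\psi _\tau }\left( {{\varepsilon _{il}}} \right) }\); the coefficients \(\phi _{jl}^{(i)}\) fill the unit lower-triangular Cholesky factor, and the innovations \(e_{ij,\tau }\) have variances \(d_\tau ^2\left( {{t_{ij}}} \right) \). Likewise, the smoothed estimating function in (C6) may be read as \({{\tilde{\varvec{U}}}_{w\tau }}({\varvec{\beta }_\tau }) = {E_{\varvec{Z}}}\left\{ {{\varvec{U}_{w\tau }}\left( {{\varvec{\beta }_\tau } + {\varvec{\Omega }^{1/2}}\varvec{Z}} \right) } \right\} \) with \(\varvec{Z}\sim N\left( {\mathbf {0},\varvec{I}} \right) \), which is differentiable even though \(\psi _\tau \) is a step function; this is consistent with \(\varvec{\Omega } = O\left( {{1 / n}} \right) \) in (C5).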

Proof of Theorem 1

By the definition of \( {{\hat{\varvec{\upsilon }} }_{ij}} \), we have

$$\begin{aligned} {{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}= & {} {\left( {\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{il}}} \right) W_{j,l,1}^{(i)}} ,\ldots ,\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{il}}} \right) W_{j,l,s}^{(i)}} } \right) ^T} \\&- {\left( {\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{\varepsilon _{il}}} \right) W_{j,l,1}^{(i)}} ,\ldots ,\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{\varepsilon _{il}}} \right) W_{j,l,s}^{(i)}} } \right) ^T} \\= & {} \left( \sum \limits _{l = 1}^{j - 1} {\left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{il}}} \right) - {\psi _\tau }\left( {{\varepsilon _{il}}} \right) } \right) W_{j,l,1}^{(i)}} ,\ldots ,\right. \\&\left. \sum \limits _{l = 1}^{j - 1} {\left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{il}}} \right) - {\psi _\tau }\left( {{\varepsilon _{il}}} \right) } \right) W_{j,l,s}^{(i)}} \right) ^T, \end{aligned}$$

where \({{\hat{\varepsilon }} _{ij}}=Y_{ij}-\varvec{X}_{ij}^T{\hat{\varvec{\beta }}}_I\).

Similar to Fu and Wang (2012), we have

$$\begin{aligned} \sqrt{n} \left\{ {{{\hat{\varvec{\beta }} }_I} - {{\varvec{\beta }} _\tau }} \right\} \mathop \rightarrow \limits ^d N\left( {\mathbf {0},{\varvec{V}_I}} \right) , \end{aligned}$$
(18)

where

$$\begin{aligned} {\varvec{V}_I}= & {} \mathop {\lim }\limits _{n \rightarrow \infty } n{\left( {\sum \limits _{i = 1}^n {\varvec{X}_i^T{\varvec{\Lambda } _i}{\varvec{\Lambda } _i}{\varvec{X}_i}} } \right) ^{ - 1}}\left( {\sum \limits _{i = 1}^n {\varvec{X}_i^T{\varvec{\Lambda } _i}Cov \left( {{\varvec{S}_i}} \right) {\varvec{\Lambda } _i}{\varvec{X}_i}} } \right) \\&\times {\left( {\sum \limits _{i = 1}^n {\varvec{X}_i^T{\varvec{\Lambda } _i}{\varvec{\Lambda } _i}{\varvec{X}_i}} } \right) ^{ - 1}}. \end{aligned}$$

Thus, by conditions (C1) and (C4), we have

$$\begin{aligned} \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {{\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) }^T}} } = {O_p}\left( {{n^{ - 1}}} \right) . \end{aligned}$$
(19)

Similarly, we have

$$\begin{aligned} \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {\hat{\varvec{\upsilon }}} _{ij}^T} }= & {} \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) \varvec{\upsilon } _{ij}^T} } \\&+ \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {{\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) }^T}} } \\= & {} \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) \varvec{\upsilon } _{ij}^T} } + {O_p}\left( {{n^{ - 1}}} \right) , \end{aligned}$$

and

$$\begin{aligned}&\left| {\frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) \varvec{\upsilon } _{ij}^T} } } \right| \\&\quad \le {o_p}\left( 1 \right) \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {\left| {\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{\varepsilon _{il}}} \right) W_{j,l,1}^{(i)}} } \right| ,\ldots ,\left| {\sum \limits _{l = 1}^{j - 1} {{\psi _\tau }\left( {{{{{\varepsilon }} }_{il}}} \right) W_{j,l,s}^{(i)}} } \right| } \right) } } \\&\quad = {o_p}\left( 1 \right) . \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {\hat{\varvec{\upsilon }}} _{ij}^T} } = {o_p}\left( 1 \right) . \end{aligned}$$

Note that

$$\begin{aligned} \sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}}{\hat{\varvec{\upsilon }} }_{ij}}{\hat{\varvec{\upsilon }}} _{ij}^T= & {} \sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left\{ {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) + {\varvec{\upsilon } _{ij}}} \right\} \left\{ {\left( {{\hat{\varvec{\upsilon }}} _{ij}^T - \varvec{\upsilon } _{ij}^T} \right) + \varvec{\upsilon } _{ij}^T} \right\} } } \\= & {} \sum \limits _{i = 1}^n \sum \limits _{j = 2}^{{m_i}} \left\{ \left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {{\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) }^T} + \left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {\hat{\varvec{\upsilon }}} _{ij}^T \right. \\&\left. + {\varvec{\upsilon } _{ij}}{{\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) }^T} + {\varvec{\upsilon } _{ij}}\varvec{\upsilon } _{ij}^T \right\} . \end{aligned}$$

Together with condition (C7), this yields

$$\begin{aligned} \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{{\hat{\varvec{\upsilon }} }_{ij}}{\hat{\varvec{\upsilon }}} _{ij}^T = } } \frac{1}{{N - n}}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{\varvec{\upsilon } _{ij}}\varvec{\upsilon } _{ij}^T + {o_p}} } \left( 1 \right) \mathop \rightarrow \limits ^p \varvec{\Lambda } \end{aligned}$$
(20)

as \(n\rightarrow \infty \). In addition

$$\begin{aligned}&\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{{\hat{\varvec{\upsilon }} }_{ij}}{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) } } \\&\quad =\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {\left\{ {\left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) \left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } \right) } \right. } } \\&\qquad \left. { + \left( {{{\hat{\varvec{\upsilon }} }_{ij}} - {\varvec{\upsilon } _{ij}}} \right) {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) + {\varvec{\upsilon } _{ij}}\left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } \right) + {\varvec{\upsilon } _{ij}}{\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } \right\} \\&\quad \mathop {=}\limits ^{\Delta } \sqrt{N - n} \left( {{I_1} + {I_2} + {I_3} + {I_4}} \right) . \end{aligned}$$

Similar to the proof of (19), we have \(I_1=o_p\left( 1\right) \), \(I_2=o_p\left( 1\right) \) and \(I_3=o_p\left( 1\right) \). Furthermore,

$$\begin{aligned} {I_4}= & {} \frac{1}{{\sqrt{N - n} }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{\varvec{\upsilon } _{ij}}{\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } } \\= & {} \frac{1}{{\sqrt{N - n} }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{\varvec{\upsilon } _{ij}}\varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau } } + \frac{1}{{\sqrt{N - n} }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{\varvec{\upsilon } _{ij}}{e_{ij,\tau }}} }. \end{aligned}$$

It remains to show that

$$\begin{aligned} \frac{1}{{\sqrt{N - n} }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{\varvec{\upsilon } _{ij}}{e_{ij,\tau }}} } \mathop \rightarrow \limits ^d N\left( {\mathbf {0},\varvec{\Delta } } \right) . \end{aligned}$$
(21)

Then, combining (20) and (21) and applying Slutsky’s theorem, it follows that

$$\begin{aligned} \sqrt{N - n} \left( {{\hat{\varvec{\theta }}}_\tau - {\varvec{\theta }_\tau }} \right) \mathop \rightarrow \limits ^d N\left( {\mathbf {0},{\varvec{\Lambda } ^{ - 1}}\varvec{\Delta } {\varvec{\Lambda } ^{ - 1}}} \right) . \end{aligned}$$

Next, we prove (21). Let \(\varvec{\kappa }= {\left( {{\kappa _1},\ldots ,{\kappa _s}} \right) ^T}\) be any \(s\times 1\) constant vector whose components are not all zero, and let \(\varvec{\Psi } = {\left( {N - n} \right) ^{ - 1}}\sum \nolimits _{i = 1}^n {\sum \nolimits _{j = 2}^{{m_i}} {\left( {{\varvec{\kappa }^T}{\varvec{\upsilon } _{ij}}} \right) {e_{ij,\tau }}} } \). It is easy to show that \(E\left( \varvec{\Psi }\right) = 0\) and

$$\begin{aligned} Var\left( {\sqrt{N - n} \varvec{\Psi } } \right) = \frac{1}{{N - n}}\sum \limits _{i = 1}^n {E{{\left\{ {\sum \limits _{j = 2}^{{m_i}} {\left( {{\varvec{\kappa }^T}{\varvec{\upsilon } _{ij}}} \right) {e_{ij,\tau }}} } \right\} }^2}} . \end{aligned}$$

Denote \({\xi _i} = \sum \nolimits _{j = 2}^{{m_i}} {\left( {{\varvec{\kappa }^T}{\varvec{\upsilon } _{ij}}} \right) {e_{ij,\tau }}} \). Then

$$\begin{aligned} {S^2} =Var\left( \sum \limits _{i = 1}^n { {{\xi _i}} }\right) = \sum \limits _{i = 1}^n {Var\left( {{\xi _i}} \right) } = \sum \limits _{i = 1}^n E{\left( \left( \sum \limits _{j = 2}^{{m_i}} {\left( {{\varvec{\kappa }^T}{\varvec{\upsilon } _{ij}}} \right) {e_{ij,\tau }}} \right) ^2\right) } . \end{aligned}$$

By the Lyapunov central limit theorem, (21) holds provided that

$$\begin{aligned} \frac{{\sum \nolimits _{i = 1}^n {E{{\left| {{\xi _i}} \right| }^3}} }}{{{S^3}}} \rightarrow 0. \end{aligned}$$
(22)

Hence it remains to verify (22). By condition (C3), we have

$$\begin{aligned} \frac{{\sum \limits _{i = 1}^n {E{{\left| {{\xi _i}} \right| }^3}} }}{{{S^3}}} \le \frac{{CnE{{\left( {\sum \limits _{k = 1}^s {\left| {{\kappa _k}} \right| \left| {{\upsilon _{ijk}}} \right| } } \right) }^3}}}{{{n^{{3 /2}}}{{\left[ {E{{\left\{ {\sum \limits _{k = 1}^s {{\kappa _k}{\upsilon _{ijk}}} } \right\} }^2}} \right] }^{{3 /2}}}}} = {O_p}\left( {{n^{{{ - 1} /2}}}} \right) = {o_p}\left( 1 \right) . \end{aligned}$$

Thus we complete the proof of Theorem 1. \(\square \)

Proof of Theorem 2

Note that

$$\begin{aligned} {\hat{e}}_{ij}^2= & {} {\left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}}_\tau } \right) ^2} \\= & {} {\left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}}_\tau + {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } \right) ^2} \\= & {} {\left[ {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - \left( {{\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}}_\tau - \varvec{\upsilon } _{ij}^T\varvec{\theta } _\tau } \right) + {d_{ij,\tau }}{\varsigma _{ij}}} \right] ^2} \\= & {} d_{ij,\tau }^2\varsigma _{ij}^2 + 2{d_{ij,\tau }}{\varsigma _{ij}}\left[ {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - \left( {{\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}} _\tau - \varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau } \right) } \right] \\&+ {\left[ {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - \left( {{\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}}_\tau - \varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau } \right) } \right] ^2} \end{aligned}$$

where the third equality uses \({\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - \varvec{\upsilon } _{ij}^T\varvec{\theta } _\tau = {e_{ij,\tau }} = {d_{ij,\tau }}{\varsigma _{ij}}\). It then follows from (A.3) of Fan and Yao (1998) that

$$\begin{aligned} {{{\hat{d}}}_\tau ^2}\left( t \right) - {d_\tau ^2}\left( t \right) = {I_1} + {I_2} + {I_3} + {I_4} + {o_p}\left( {{1}} \right) \left| {{I_1} + {I_2} + {I_3} + {I_4}} \right| \end{aligned}$$
(23)

where

$$\begin{aligned} {I_1}= & {} \frac{1}{{Nf_T\left( t \right) }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{K_h}\left( {{t_{ij}} - t} \right) \left\{ {{d_\tau ^2}\left( {{t_{ij}}} \right) - {d_\tau ^2}\left( t \right) - {{\dot{d}}_\tau ^2}\left( t \right) \left( {{t_{ij}} - t} \right) } \right\} } } , \\ {I_2}= & {} \frac{1}{{Nf_T\left( t \right) }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{K_h}\left( {{t_{ij}} - t} \right) \left\{ {{d_\tau ^2}\left( {{t_{ij}}} \right) \left( {\varsigma _{ij}^2 - 1} \right) } \right\} } } , \\ {I_3}= & {} 2\frac{1}{{Nf_T\left( t \right) }}\sum \limits _{i = 1}^n \sum \limits _{j = 2}^{{m_i}} {K_h}\left( {{t_{ij}} - t} \right) {d_{ij,\tau }}{\varsigma _{ij}}\\&\left[ {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - \left( {{\hat{\varvec{\upsilon }} }_{ij}^T{\hat{\varvec{\theta }}}_\tau - \varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau } \right) } \right] , \\ {I_4}= & {} \frac{1}{{Nf_T\left( t \right) }}{\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{K_h}\left( {{t_{ij}} - t} \right) \left[ {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) - \left( {{\hat{\varvec{\upsilon }} }_{ij}^T{\hat{\varvec{\theta }} }_\tau - \varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau } \right) } \right] } } ^2}. \end{aligned}$$

Theorem 2 follows directly from statements (a)–(d) below:

$$\begin{aligned}\begin{array}{l} (a)\,\,{I_1} = \frac{1}{2}{\mu _2}{h^2}{{{\ddot{d}}}_\tau ^2}\left( t \right) + {o_p}\left( {{h^2}} \right) , \\ (b)\,\,\sqrt{Nh} {I_2}\mathop \rightarrow \limits ^d N\left( {0,\Xi } \right) , \\ (c)\,\,{I_3} = {o_p}\left( { \frac{1}{{\sqrt{Nh} }}} \right) , \\ (d)\,\,{I_4} = {o_p}\left( { \frac{1}{{\sqrt{Nh} }}} \right) . \\ \end{array} \end{aligned}$$

Statement (a) follows from a Taylor expansion. For (b), note that \(I_2\) is asymptotically normal with mean 0 and variance

$$\begin{aligned} Var\left( {{I_2}} \right) = \frac{{{\nu _0}}}{{{N^2}hf_T\left( t \right) }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {E\left[ {{{\left( {\varsigma _{ij}^2 - 1} \right) }^2}\left| {{t_{ij}} = t} \right. } \right] {d_\tau ^4}\left( t \right) } }. \end{aligned}$$

It follows from the definition of \(I_3\) that

$$\begin{aligned} {I_3}&= 2\frac{1}{{Nf_T\left( t \right) }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{K_h}\left( {{t_{ij}} - t} \right) {d_{ij,\tau }}{\varsigma _{ij}}\left[ {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } \right] } } \\&\quad - 2\frac{1}{{Nf_T\left( t \right) }}\sum \limits _{i = 1}^n {\sum \limits _{j = 2}^{{m_i}} {{K_h}\left( {{t_{ij}} - t} \right) {d_{ij,\tau }}{\varsigma _{ij}}\left( {{\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}}_\tau -\varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau } \right) } } \\&\mathop {=}\limits ^{\Delta } {I_{31}} + {I_{32}}. \end{aligned}$$

By (18), \({\hat{\varvec{\beta }}}_I\) is a root-n consistent estimator of \(\varvec{\beta }_\tau \); together with conditions (C1) and (C4), this implies

$$\begin{aligned} {\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{ij}}} \right) - {\psi _\tau }\left( {{\varepsilon _{ij}}} \right) = {\psi _\tau }\left( {{Y_{ij}} -\varvec{X}_{ij}^T{{\hat{\varvec{\beta }} }_I}} \right) - {\psi _\tau }\left( {{Y_{ij}} - \varvec{X}_{ij}^T{\varvec{\beta } _\tau }} \right) = {O_p}\left( {{1 /{\sqrt{n} }}} \right) ,\nonumber \\ \end{aligned}$$
(24)

and

$$\begin{aligned} \left( {{\hat{\varvec{\upsilon }}} _{ij}^T - \varvec{\upsilon } _{ij}^T} \right) \varvec{\theta }_\tau= & {} \left( \sum \limits _{l = 1}^{j - 1} \left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{il}}} \right) - {\psi _\tau }\left( {{\varepsilon _{il}}} \right) } \right) W_{j,l,1}^{(i)},\ldots ,\right. \nonumber \\&\left. \sum \limits _{l = 1}^{j - 1} {\left( {{\psi _\tau }\left( {{{{\hat{\varepsilon }} }_{il}}} \right) - {\psi _\tau }\left( {{\varepsilon _{il}}} \right) } \right) W_{j,l,s}^{(i)}} \right) \varvec{\theta } _\tau \nonumber \\= & {} {O_p}\left( {{1 / {\sqrt{n} }}} \right) . \end{aligned}$$
(25)

Furthermore, by Theorem 1 and (24), (25), we have

$$\begin{aligned}&{\hat{\varvec{\upsilon }}} _{ij}^T{\hat{\varvec{\theta }}}_\tau - \varvec{\upsilon } _{ij}^T\varvec{\theta }_\tau \nonumber \\&\quad = \left( {{\hat{\varvec{\upsilon }}} _{ij}^T - \varvec{\upsilon } _{ij}^T} \right) \varvec{\theta }_\tau + \left( {{\hat{\varvec{\upsilon }}} _{ij}^T - \varvec{\upsilon } _{ij}^T} \right) \left( {{\hat{\varvec{\theta }}}_\tau - \varvec{\theta } _\tau } \right) + \varvec{\upsilon } _{ij}^T\left( {{\hat{\varvec{\theta }}}_\tau - \varvec{\theta }_\tau } \right) \nonumber \\&\quad = {O_p}\left( {{n^{{{ - 1} / 2}}}} \right) + {O_p}\left( {{n^{{{ - 1} /2}}}} \right) {O_p}\left( {{n^{{{ - 1} /2}}}} \right) + {O_p}\left( {{n^{{{ - 1} /2}}}} \right) \nonumber \\&\quad ={O_p}\left( {{n^{{{ - 1} / 2}}}} \right) . \end{aligned}$$
(26)

On the other hand, since \(E\left( {{\varsigma _{ij}}|{t_{ij}}} \right) = 0\) and \({{Var}}\left( {{\varsigma _{ij}}|{t_{ij}}} \right) = 1\), together with (24) and (26), we have

$$\begin{aligned} {I_{31}} = {o_p}\left( {{1 /{\sqrt{Nh} }}} \right) ,{I_{32}} ={o_p}\left( {{1 /{\sqrt{Nh} }}} \right) . \end{aligned}$$

Then \({I_{3}} = {o_p}\left( {{1 / {\sqrt{Nh} }}} \right) \). By the same arguments as those used for \(I_3\), we have \({I_{4}} = {o_p}\left( {{1 / {\sqrt{Nh} }}} \right) \). Under the conditions \(h\rightarrow 0\), \(Nh \rightarrow \infty \) as \(n\rightarrow \infty \) and \(\lim {\sup _{n \rightarrow \infty }}N{h^5} < \infty \), the proof of Theorem 2 is complete. \(\square \)
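As a numerical illustration of the kernel estimator \({{{\hat{d}}}_\tau ^2}\left( t \right) \) analyzed above, here is a minimal Nadaraya–Watson smoother of squared residuals in the spirit of Fan and Yao (1998). It is a sketch only: the Gaussian kernel, the bandwidth choice, and all variable names are illustrative assumptions rather than the paper's specification.

```python
# Minimal Nadaraya-Watson sketch of d_tau^2(t) from squared residuals.
# Gaussian kernel and bandwidth are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def d2_hat(t, t_obs, e2, h):
    """Kernel-weighted average of squared residuals e2 observed at times t_obs."""
    w = norm.pdf((t_obs - t) / h) / h      # K_h(t_obs - t)
    return np.sum(w * e2) / np.sum(w)

# Toy check against a known variance function d^2(t) = (1 + t)^2.
rng = np.random.default_rng(1)
N = 2000
t_obs = rng.uniform(size=N)
e2 = ((1 + t_obs) * rng.normal(size=N)) ** 2   # squared innovations
h = N ** (-1 / 5)                              # h ~ N^{-1/5}, so N h^5 stays bounded
print(d2_hat(0.5, t_obs, e2, h), 'vs truth', (1 + 0.5) ** 2)
```

The bandwidth order \(h \sim N^{-1/5}\) is chosen to match the conditions \(Nh \rightarrow \infty \) and \(\lim \sup N{h^5} < \infty \) stated at the end of the proof.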

Proof of Theorem 3

By arguments similar to those for Theorem 3.1 in Lu and Fan (2015), one can show that \({{\hat{\varvec{\beta }}_{{\varvec{w}}{\varvec{\tau }}} }}\) is a root-n consistent estimator of \(\varvec{\beta }_\tau \) and is asymptotically normally distributed; the details are therefore omitted. \(\square \)

Proof of Theorem 4

Theorem 4 follows by arguments similar to those for Lemma 3.1 in Lu and Fan (2015); the details are therefore omitted. \(\square \)

Proof of Corollary 1

Corollary 1 follows by arguments similar to those for Theorem 3.2 in Lu and Fan (2015); the details are therefore omitted. \(\square \)


Cite this article

Lv, J., Guo, C. Efficient parameter estimation via modified Cholesky decomposition for quantile regression with longitudinal data. Comput Stat 32, 947–975 (2017). https://doi.org/10.1007/s00180-017-0714-6
