Abstract
This paper is concerned with efficient estimation and variable selection in the partial linear varying coefficient quantile regression model with longitudinal data. To improve estimation efficiency in quantile regression, we propose a new estimating function, based on B-spline basis approximation of the nonparametric parts, which can incorporate the correlation structure between repeated measures. To reduce the computational burden, the induced smoothing method is employed. The new method is empirically shown to be much more efficient and robust than the popular generalized estimating equations based methods. Under mild conditions, the asymptotic normality of the estimators of the parametric components and the optimal convergence rate of the estimators of the nonparametric functions are established. Furthermore, for variable selection, a smooth-threshold estimating equation is proposed, which can exploit the correlation structure and select the nonparametric and parametric parts simultaneously. The proposed selection procedure is shown to be consistent in variable selection and to enjoy the oracle property in estimation. Simulation studies and a real data analysis demonstrate the finite sample performance.
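The induced smoothing idea mentioned in the abstract replaces the discontinuous indicator in the quantile score with a normal distribution function evaluated at a bandwidth-scaled residual. A minimal sketch of this substitution (illustrative only; the function names are ours, and the paper's full estimating function additionally weights the scores through the working correlation structure):

```python
import numpy as np
from scipy.stats import norm

def quantile_score(residual, tau):
    """Non-smooth quantile score: psi_tau(e) = tau - I(e < 0)."""
    return tau - (residual < 0).astype(float)

def smoothed_score(residual, tau, r):
    """Induced-smoothing version: the indicator I(e < 0) is replaced
    by Phi(-e / r), where r is a bandwidth, typically O(n^{-1/2}).
    The score becomes differentiable, so Newton-type solvers apply."""
    return tau - norm.cdf(-residual / r)
```

As the bandwidth `r` shrinks to zero, the smoothed score converges to the non-smooth one pointwise away from zero residuals, which is the heuristic behind the asymptotic equivalence results in the appendix.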
References
Brown B, Wang Y (2005) Standard errors and covariance matrices for smoothed rank estimators. Biometrika 92:149–158
Fan J, Huang T (2005) Profile likelihood inferences on semiparametric varying coefficient partially linear models. Bernoulli 11:1031–1057
Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96:1348–1360
Fu L, Wang Y (2012) Quantile regression for longitudinal data with a working correlation model. Comput Stat Data Anal 56:2526–2538
Fu L, Wang Y (2016) Efficient parameter estimation via Gaussian copulas for quantile regression with longitudinal data. J Multivar Anal 143:492–502
Fu L, Wang Y, Zhu M (2016) A Gaussian pseudolikelihood approach for quantile regression with repeated measurements. Comput Stat Data Anal 84:41–53
Huang J, Horowitz J, Wei F (2010) Variable selection in nonparametric additive models. Ann Stat 38:2282–2313
Huang J, Breheny P, Ma S (2012) A selective review of group selection in high-dimensional models. Stat Sci 27:481–499
Huang J, Wei F, Ma S (2012) Semiparametric regression pursuit. Stat Sin 22:1403–1426
Jung S (1996) Quasi-likelihood for median regression models. J Am Stat Assoc 91:251–257
Li R, Liang H (2008) Variable selection in semiparametric regression model. Ann Stat 36:261–286
Li J, Li Y, Zhang R (2017) B spline variable selection for the single index models. Stat Pap 58:691–706
Ma S, Song Q, Wang L (2013) Simultaneous variable selection and estimation in semiparametric modeling of longitudinal/clustered data. Bernoulli 19:252–274
Pollard D (1990) Empirical processes: theory and applications. In: NSF-CBMS regional conference series in probability and statistics. Institute of Mathematical Statistics, Hayward
Schumaker L (1981) Spline functions: basic theory. Wiley, New York
Stone C (1982) Optimal global rates of convergence for nonparametric regression. Ann Stat 10:1040–1053
Stone C (1985) Additive regression and other nonparametric models. Ann Stat 13:689–705
Tang Y, Wang H, Zhu Z (2013) Variable selection in quantile varying coefficient models with longitudinal data. Comput Stat Data Anal 57:435–449
Tibshirani RJ (1996) Regression shrinkage and selection via the lasso. J R Stat Soc B 58:267–288
Ueki M (2009) A note on automatic variable selection using smooth-threshold estimating equations. Biometrika 96:1005–1011
Wang N (2003) Marginal nonparametric kernel regression accounting for within-subject correlation. Biometrika 90:43–52
Wang Y, Carey V (2003) Working correlation structure misspecification, estimation and covariate design: implications for GEE performance. Biometrika 90:29–41
Wang H, Xia Y (2009) Shrinkage estimation of the varying coefficient model. J Am Stat Assoc 104:747–757
Wang K, Lin L (2015) Variable selection in semiparametric quantile modeling for longitudinal data. Commun Stat Theory Methods 44:2243–2266
Wang K, Lin L (2016) Robust and efficient estimator for simultaneous model structure identification and variable selection in generalized partial linear varying coefficient models with longitudinal data. Stat Pap. https://doi.org/10.1007/s00362-017-0890-z
Wang L, Li H, Huang J (2008) Variable selection in nonparametric varying coefficient models for analysis of repeated measurements. J Am Stat Assoc 103:1556–1569
Wang H, Zhu Z, Zhou J (2009) Quantile regression in partially linear varying coefficient models. Ann Stat 37:3841–3866
Wang L, Xue L, Qu A, Liang H (2014) Estimation and model selection in generalized additive partial linear models for correlated data with diverging number of covariates. Ann Stat 42:592–624
Wei F, Huang J, Li H (2011) Variable selection and estimation in high-dimensional varying-coefficient models. Stat Sin 21:1515–1540
Xue L (2009) Variable selection in additive models. Stat Sin 19:1281–1296
Xue L, Qu A (2012) Variable selection in high-dimensional varying coefficient models with global optimality. J Mach Learn Res 13:1973–1998
Yang H, Guo C, Lv J (2016) Variable selection for generalized varying coefficient models with longitudinal data. Stat Pap 57:115–132
Zhang R, Zhao W, Liu J (2013) Robust estimation and variable selection for semiparametric partially linear varying coefficient model based on modal regression. J Nonparametric Stat 25:523–544
Zhao P, Xue L (2009) Variable selection in semiparametric regression analysis for longitudinal data. Ann Inst Stat Math 64:213–231
Zhao P, Xue L (2010) Variable selection for semiparametric varying coefficient partially linear errors-in-variables models. J Multivar Anal 101:1872–1883
Zhao W, Zhang R, Liu J, Lv Y (2014) Robust and efficient variable selection for semiparametric partially linear varying coefficient model based on modal regression. Ann Inst Stat Math 66:165–191
Zhao W, Lian H, Liang H (2017) GEE analysis for longitudinal single-index quantile regression. J Stat Plan Inference 187:78–102
Zhou Y, Liang H (2009) Statistical inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates. Ann Stat 37:427–458
Zhu Z, Fung W, He X (2008) On the asymptotics of marginal regression splines with longitudinal data. Biometrika 95:907–917
Zou H (2006) The adaptive lasso and its oracle properties. J Am Stat Assoc 101:1418–1429
Zou H, Li R (2008) One-step sparse estimates in nonconcave penalized likelihood models. Ann Stat 36:1509–1533
Acknowledgements
Kangning Wang was supported by NSF Project (ZR2017BA002) of Shandong Province of China and NNSF Project (71673171) of China.
Appendix
Proof of Theorem 2.1. Let \(\bar{\varvec{U}}_{\tau }^o(\varvec{\zeta })=-\sum _{i=1}^n\varvec{D}_{i}^T \varvec{\Lambda }_i\varvec{R}_i^{-1} \varvec{P}_i(\varvec{\zeta })\) with \(\varvec{P}_i(\varvec{\zeta })=\left( \tau -\Pr (y_{i1}-\varvec{d}_{i1}^T \varvec{\zeta }<0),\ldots ,\tau -\Pr (y_{im_i}-\varvec{d}_{im_i}^T \varvec{\zeta }<0)\right) ^T\). Then we can get that
where \(\varvec{a}_{ij}\) is a \((p(K_{n}+\hbar +1)+q)\times 1\) vector with \(\varvec{D}_{i}^T\varvec{\Lambda }_i \varvec{R}_i^{-1}=(\varvec{a}_{i1},\ldots ,\varvec{a}_{im_i})\). Under condition (C4) and from the law of large numbers (Pollard 1990), we have that
Therefore, \(\sup _{\varvec{\zeta }}\left\| \frac{1}{n} \left[ \bar{\varvec{U}}_{\tau }^o(\varvec{\zeta })-{\varvec{U}}_{\tau }^{o} (\varvec{\zeta })\right] \right\| =o\left( \frac{1}{\sqrt{n}}\right) \). By condition (C3), we know that \(\varvec{\zeta }_0\) is the solution of \(\bar{\varvec{U}}_{\tau }^o(\varvec{\zeta })=\varvec{0}\). Since \(\tilde{\varvec{\zeta }}^o\) is the solution of the equation \({\varvec{U}}_{\tau }^o(\varvec{\zeta })=\varvec{0}\), this together with condition (C4) gives \(\tilde{\varvec{\zeta }}^o\rightarrow _{p}\varvec{\zeta }_0\) as \(n\rightarrow \infty \). For any \(\varvec{\zeta }\) satisfying \(\Vert \varvec{\zeta }-\varvec{\zeta }_0\Vert =O(n^{-1/3})\),
According to Lemma 3 of Jung (1996), we have that
Therefore, \({\varvec{U}}_{\tau }^o(\varvec{\zeta })-{\varvec{U}}_{\tau }^o(\varvec{\zeta }_0) =\bar{\varvec{U}}_{\tau }^o(\varvec{\zeta })+o_{p}(\sqrt{n})\). By Taylor’s expansion of \(\bar{\varvec{U}}_{\tau }^o(\varvec{\zeta })\) together with \(\bar{\varvec{U}}_{\tau }^o(\varvec{\zeta }_0)=\varvec{0}\), we can obtain
where
Noting that \(\tilde{\varvec{\zeta }}^o\) is in the \(n^{-1/3}\) neighborhood of \(\varvec{\zeta }_0\) and \({\varvec{U}}_{\tau }^o(\tilde{\varvec{\zeta }}^o)=\varvec{0}\), we have
To obtain the closed form expression of \(\tilde{\varvec{\beta }}^o\), similar to Ma et al. (2013), we write the inverse of \({\varvec{D}}_{\tau }(\varvec{\zeta }_0)\) as the following block form
where \(\varvec{D}^{11}=\left( \varvec{D}_{\varvec{X}\varvec{X}}-\varvec{D}_{\varvec{X} \varvec{\Pi }}\varvec{D}^{-1}_{\varvec{\Pi }\varvec{\Pi }}\varvec{D}_{\varvec{\Pi }\varvec{X}}\right) ^{-1}\), \(\varvec{D}^{22}=\left( \varvec{D}_{\varvec{\Pi }\varvec{\Pi }} -\varvec{D}_{\varvec{\Pi }\varvec{X}}\varvec{D}^{-1}_{\varvec{X}\varvec{X}}\varvec{D}_{\varvec{X} \varvec{\Pi }}\right) ^{-1}\), \(\varvec{D}^{12}=-\varvec{D}^{11}\varvec{D}_{\varvec{X}\varvec{\Pi }} \varvec{D}_{\varvec{\Pi }\varvec{\Pi }}^{-1}\) and \(\varvec{D}^{21}=-\varvec{D}^{22}\varvec{D}_{\varvec{\Pi } \varvec{X}}\varvec{D}_{\varvec{X}\varvec{X}}^{-1}\). Furthermore, let
Then
Thus,
Because \(\varvec{S}_i(\varvec{\zeta }_0)\) are independent random variables with mean zero, and
The multivariate central limit theorem implies that
furthermore,
by the law of large numbers. Then, by Slutsky's theorem, it follows that \(\sqrt{n}(\tilde{\varvec{\beta }}^o-\varvec{\beta }_0)\rightarrow _{d} N(\varvec{0}, \varvec{\Sigma }^{-1}\varvec{\Xi }\varvec{\Sigma }^{-1})\). This completes the proof of part (a). Furthermore, by the same arguments as in the proof of part (a), we can get
The triangle inequality implies that
The proof is completed.
Proof of Theorem 2.2
where \(\varvec{H}_{n}^2=K_{n}\varvec{\Pi }^T\varvec{\Lambda }\varvec{\Pi }\). Furthermore, we standardize \(\widetilde{\varvec{z}}_{ij}=\varvec{\Xi }^{1/2}\varvec{\Sigma }^{-1}\varvec{z}_{ij}\) and \(\widetilde{\varvec{\Pi }}_{ij}=K_{n}^{1/2}\varvec{H}_{n}^{-1}\varvec{\Pi }_{ij}\). Note that \(\tilde{{S}}_{ij}({\varvec{\zeta }})-{{S}}_{ij}({\varvec{\zeta }}) =\{2I(\Delta _{ij}<0)-1\}\Phi (-|\Delta _{ij}|)\), where \(\Delta _{ij}=(\epsilon _{ij}+u_{ij})/r_{ij}\) with \(u_{ij}=-(\varvec{\varsigma }\left( \varvec{\zeta }\right) ^T (\widetilde{\varvec{z}}_{ij}^T,\widetilde{\varvec{\Pi }}_{ij}^T)^T+R_{nij})\) and \(R_{nij}=\varvec{\Pi }_{ij}^T\varvec{\Theta }_0-\sum _{l=1}^px_{ij}^l \alpha _{0l}(t_{ij})\). We can obtain
Note that
where \(\eta (t)\) is between 0 and \(r_{ij}t-u_{ij}\). Note that \(\int _{-\infty }^{\infty }\Phi (-|t|)\{2I(t<0)-1\}dt=0\), and by condition (C3) there exists a constant C such that \(\sup _{i,j}|f_{ij}^T(\eta (t))|\le C\). Then, since \(\int _{-\infty }^{\infty }\Phi (-|t|)|t|dt=1/2\), we have that
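The two integral identities used in this step can be checked numerically. A quick scipy sketch (verification only, not part of the proof; the odd integrand is split at its discontinuity):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def phi_bar(t):
    """Phi(-|t|): the tail weight appearing in the remainder bound."""
    return norm.cdf(-abs(t))

# Odd part: Phi(-|t|){2 I(t<0) - 1} integrates to zero by symmetry.
odd = (quad(lambda t: phi_bar(t) * (2.0 * (t < 0) - 1.0), -np.inf, 0)[0]
       + quad(lambda t: phi_bar(t) * (2.0 * (t < 0) - 1.0), 0, np.inf)[0])

# Even part: integration by parts gives
#   int_{-inf}^{inf} Phi(-|t|)|t| dt = 2 int_0^inf t Phi(-t) dt = 1/2.
even = 2.0 * quad(lambda t: t * phi_bar(t), 0, np.inf)[0]
```

The even integral follows analytically from \(\int_0^\infty t\,\Phi(-t)\,dt = \tfrac{1}{2}\int_0^\infty t^2\phi(t)\,dt = \tfrac{1}{4}\), by integration by parts.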
Under conditions (C4) and (C5), as \(n\rightarrow \infty \), we can obtain
In addition, by the Cauchy–Schwarz inequality,
For \(j=1,\ldots ,m_i\),
where \(\eta \) lies in the interval \((-r_{ij}c-u_{ij},r_{ij}c-u_{ij})\). Let \(c=n^{1/3}\); under condition (C5), since \(r_{ij}=O(n^{-1/2})\), we have \(r_{ij}c=O(n^{-1/6})\). Note that \(\Phi ^2(-c)\rightarrow 0\) and \(r_{ij}cf_{ij}(\eta )\rightarrow 0\) as \(n\rightarrow \infty \). By conditions (C4) and (C6), it is easy to obtain \(\frac{1}{{n}}\text{ var }\left[ {\varvec{U}}_{\tau }^o (\varvec{\zeta })-\tilde{\varvec{U}}_{\tau }^o(\varvec{\zeta })\right] =o(1)\). Therefore, we have \(\frac{1}{\sqrt{n}}\left[ {\varvec{U}}_{\tau }^o (\varvec{\zeta })-\tilde{\varvec{U}}_{\tau }^o(\varvec{\zeta })\right] \rightarrow 0\) as \(n\rightarrow \infty \) for any \(\varvec{\zeta }\). The proof is completed.
Proof of Theorem 2.3
Note that \(\sup _{\varvec{\zeta }}\Vert \frac{1}{n}\left( \bar{{\varvec{U}}}_{\tau }^o (\varvec{\zeta })-{\varvec{U}}_{\tau }^o(\varvec{\zeta })\right) \Vert =o \left( \frac{1}{\sqrt{n}}\right) \), and by Theorem 2.2, \(\sup _{\varvec{\zeta }}\Vert \frac{1}{n}\left( \bar{{\varvec{U}}}_{\tau }^o (\varvec{\zeta })-\tilde{{\varvec{U}}}_{\tau }^o(\varvec{\zeta })\right) \Vert =o \left( \frac{1}{\sqrt{n}}\right) \). Since \(\varvec{\zeta }_0\) is the unique solution of the equation \(\bar{{\varvec{U}}}_{\tau }^o(\varvec{\zeta })=\varvec{0}\), this together with the definition of \(\tilde{\varvec{\zeta }}\) implies that \(\tilde{\varvec{\zeta }}\rightarrow _{p}\varvec{\zeta }_0\) as \(n\rightarrow \infty \). To prove the asymptotic normality of \(\tilde{\varvec{\zeta }}\), we first show that \(\frac{1}{n}\{\tilde{\varvec{D}}_{\tau }(\varvec{\zeta }_0) -{\varvec{D}}_{\tau }(\varvec{\zeta }_0)\}\rightarrow _{p} 0\). Note that \(E\left[ \tilde{\varvec{D}}_{\tau }(\varvec{\zeta }_0)\right] -{\varvec{D}}_{\tau }(\varvec{\zeta }_0)=(D_{ij})_{i,j=1,\ldots ,n}\), with \(D_{ij}=\frac{1}{r_{ij}}E\phi \left( \frac{\epsilon _{ij} +u_{ij}}{r_{ij}}\right) -f_{ij}(0)\). Since
where \(\eta (t)\) lies between 0 and \(r_{ij}t-u_{ij}\). By condition (C3), \(f_{ij}^T(\cdot )\) is uniformly bounded, hence there exists a constant C satisfying \(|f_{ij}^T(\eta (t))|\le C\), and by condition (C5), we have \(|\frac{1}{r_{ij}}E\phi (\frac{\epsilon _{ij}+u_{ij}}{r_{ij}})-f_{ij}(0)|\rightarrow 0\). By the strong law of large numbers, we know that \(\frac{1}{n}\tilde{\varvec{D}}_{\tau }(\varvec{\zeta }_0)\rightarrow E[\frac{1}{n}\tilde{\varvec{D}}_{\tau }(\varvec{\zeta }_0)]\). Using the triangle inequality, we have
which implies that \(\frac{1}{n}\left\{ \tilde{\varvec{D}}_{\tau }(\varvec{\zeta }_0) -{\varvec{D}}_{\tau }(\varvec{\zeta }_0)\right\} \rightarrow _{p}0\). By Taylor series expansion of \(\tilde{{\varvec{U}}}_{\tau }^o(\varvec{\zeta })\) around \(\varvec{\zeta }_0\), we have \(\tilde{{\varvec{U}}}_{\tau }^o(\varvec{\zeta })=\tilde{{\varvec{U}}}_{\tau }^o (\varvec{\zeta }_0)+\tilde{\varvec{D}}_{\tau }(\varvec{\zeta }^*)(\varvec{\zeta }-\varvec{\zeta }_0)\), where \(\varvec{\zeta }^*\) lies between \(\varvec{\zeta }\) and \(\varvec{\zeta }_0\). Because \(\tilde{{\varvec{U}}}_{\tau }^o(\tilde{\varvec{\zeta }})=\varvec{0}\) and \(\tilde{\varvec{\zeta }}\rightarrow \varvec{\zeta }_0\), we therefore obtain \(\varvec{\zeta }^*\rightarrow \varvec{\zeta }_0\) and \(\tilde{\varvec{D}}_{\tau }(\varvec{\zeta }^*)\rightarrow \tilde{\varvec{D}}_{\tau }(\varvec{\zeta }_0)\). Then by the same arguments used in the proof of Theorem 2.2, we can complete the proof of Theorem 2.3.
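The Taylor expansion of \(\tilde{{\varvec{U}}}_{\tau }^o\) around \(\varvec{\zeta }_0\) with derivative matrix \(\tilde{\varvec{D}}_{\tau }\) also suggests how the smoothed equation is solved in practice: by Newton iteration \(\varvec{\zeta }\leftarrow \varvec{\zeta }+\tilde{\varvec{D}}_{\tau }(\varvec{\zeta })^{-1}\tilde{{\varvec{U}}}_{\tau }^o(\varvec{\zeta })\). A toy sketch under strong simplifying assumptions (working independence, identity weights, a single common bandwidth; all names are ours, not the paper's):

```python
import numpy as np
from scipy.stats import norm

def smoothed_qr_newton(X, y, tau=0.5, r=None, n_iter=50, tol=1e-10):
    """Newton iteration for the induced-smoothing quantile estimating
    equation U(zeta) = X^T {tau - Phi((X zeta - y) / r)} = 0.
    Toy version: working independence, common bandwidth r = O(n^{-1/2})."""
    n, p = X.shape
    if r is None:
        r = 1.0 / np.sqrt(n)
    zeta = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares start
    for _ in range(n_iter):
        e = y - X @ zeta
        U = X.T @ (tau - norm.cdf(-e / r))
        # Derivative matrix: sum of (1/r) phi(e_i/r) x_i x_i^T (positive definite)
        W = norm.pdf(e / r) / r
        D = X.T @ (X * W[:, None])
        step = np.linalg.solve(D, U)
        # damp overly large steps to keep the toy iteration stable
        norm_step = np.linalg.norm(step)
        if norm_step > 1.0:
            step = step / norm_step
        zeta = zeta + step
        if norm_step < tol:
            break
    return zeta
```

The sign of the update follows from the fact that the Jacobian of \(\tau-\Phi(-e/r)\) with respect to \(\varvec{\zeta}\) is \(-\frac{1}{r}\phi(e/r)\,\varvec{x}\varvec{x}^T\), so Newton's step adds \(D^{-1}U\) with \(D\) positive definite.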
Proof (a) of Theorem 3.2. Let \(\delta _n=n^{-r/(2r+1)}\), \(\varvec{\beta }=\varvec{\beta }_0+\delta _n\varvec{T}_1\), \(\varvec{\Theta }=\varvec{\Theta }_0+\delta _n\varvec{T}_2\) and \(\varvec{T}=(\varvec{T}_1^T,\varvec{T}_2^T)^T\). Let \(\varvec{S}_n(\varvec{\beta },\varvec{\Theta })=\left( \varvec{I}_{p(K_{n}+\hbar +1)+q}-\hat{\varvec{\Psi }}\right) \tilde{\varvec{U}}_{\tau }^o(\varvec{\zeta })+\hat{\varvec{\Psi }}\varvec{\zeta }\). Our aim is to show that, for any \(\varepsilon >0\), there exists a constant \(C>0\) such that
for n large enough. This will imply that, with probability at least \(1-\varepsilon \), there exists a solution \(\left( \hat{\varvec{\beta }}^T,\hat{\varvec{\Theta }}^T\right) ^T\) of the equation \(\varvec{S}_n(\varvec{\beta },\varvec{\Theta })=\varvec{0}\) such that \(\Vert \left( \hat{\varvec{\beta }}^T,\hat{\varvec{\Theta }}^T\right) ^T -\left( \varvec{\beta }_0^T,\varvec{\Theta }_0^T\right) ^T\Vert =O_{p}(\delta _n)\). We will evaluate the sign of \(\delta _n\varvec{T}^T\varvec{S}_n(\varvec{\beta }_0+\delta _n\varvec{T}_1,\varvec{\Theta }_0+\delta _n\varvec{T}_2)\) in the ball \(\{\varvec{\beta }_0+\delta _n\varvec{T}_1,\varvec{\Theta }_0+\delta _n\varvec{T}_2:\Vert \varvec{T}\Vert =C\}\). By the Taylor approximation, we have that
where \((\tilde{\varvec{\beta }}^T,\tilde{\varvec{\Theta }}^T)^T\) lies between \(\left( \varvec{\beta }_0^T,\varvec{\Theta }_0^T\right) ^T\) and \(\left( \varvec{\beta }_0^T,\varvec{\Theta }_0^T\right) ^T+\delta _n\varvec{T}\). Next we consider \(I_{n1}\) and \(I_{n2}\) in turn. For \(I_{n1}\), by some elementary calculations, we have
By Cauchy–Schwarz inequality, we can derive that
Since \(\min \{\hat{\delta }_{1,k}:\hat{\delta }_{1,k}\ne 1\} \le \min \{\hat{\delta }_{1,k}:k=1,\ldots ,v\}\) and \(\min \{\hat{\delta }_{2,k}:\hat{\delta }_{2,k}\ne 1\}\le \min \{\hat{\delta }_{2,k}:k=1,\ldots ,c\}\), we only need to obtain the convergence rates of \(\min \{\hat{\delta }_{1,k}:k=1,\ldots ,v\}\) and \(\min \{\hat{\delta }_{2,k}:k=1,\ldots ,c\}\). By Theorem 2.3, we know that the initial estimator \((\tilde{\varvec{\beta }},\tilde{\varvec{\Theta }})\) satisfies \(\Vert (\tilde{\varvec{\beta }},\tilde{\varvec{\Theta }})-(\varvec{\beta }_0,\varvec{\Theta }_0) \Vert =O_{p}(n^{-r/(2r+1)})\). Using the condition \(n^{r/(2r+1)}\lambda _{\max }\rightarrow 0\), for any \(\varepsilon >0\) and \(k\in \{1,\ldots ,v\}\), we can derive that
which implies that for each \(k\in \{1,\ldots ,v\}\), \(\hat{\delta }_{1,k}=o_{p}(n^{-r/(2r+1)})\). Therefore, we can get that \(\min _{k=1,\ldots ,v}\hat{\delta }_{1,k}=o_{p}(n^{-r/(2r+1)})\). Similarly, we can prove that \(\hat{\delta }_{2,k}=o_{p}(n^{-r/(2r+1)})\), for each \(k\in \{1,\ldots ,c\}\). Therefore, we have that
Thus, we can obtain that
Furthermore, for \(I_{n12}\), we have \(I_{n12}\le \delta _n\Vert \varvec{T}\Vert \Vert \varvec{\zeta }_0\Vert =O_{p}(\delta _n\Vert \varvec{T}\Vert )\). Therefore, \(I_{n1}=O_{p}(\sqrt{n}\delta _n^2\Vert \varvec{T}\Vert )\). For \(I_{n2}\), we can obtain that
With the same argument, it is easy to prove that \(I_{n22}=O_{p}(\delta _n^2)\Vert \varvec{T}\Vert =o(n\delta _n^2\Vert \varvec{T}\Vert )\). Thus, \(\delta _n\varvec{T}^T\varvec{S}_n(\varvec{\beta }_0+\delta _n\varvec{T}_1,\varvec{\Theta }_0+\delta _n\varvec{T}_2)\) is asymptotically dominated in probability by \(I_{n21}\) on \(\{\varvec{\beta }_0+\delta _n\varvec{T}_1,\varvec{\Theta }_0+\delta _n\varvec{T}_2:\Vert \varvec{T}\Vert =C\}\), which is positive for sufficiently large C. This implies, with probability at least \(1-\varepsilon \), that there exists a local minimizer \(\left( \hat{\varvec{\beta }}^T,\hat{\varvec{\Theta }}^T\right) ^T\) such that \(\Vert \left( \hat{\varvec{\beta }}^T,\hat{\varvec{\Theta }}^T\right) ^T -\left( \varvec{\beta }_0^T,\varvec{\Theta }_0^T\right) ^T\Vert =O_{p}(\delta _n)\). Then by the same arguments used in the proof of Theorem 2.1, the proof can be completed.
Proof of Theorem 3.1
For any given \(k\in \{v+1,\ldots ,p\}\), we have \(\Vert \tilde{\varvec{\theta }}_k\Vert =O_p(n^{-r/(2r+1)})\); together with \(n^{(1+\tau )r/(2r+1)}\lambda _{\min }\rightarrow \infty \), we can derive that
This implies that \(\lim _{n\rightarrow \infty }\Pr \left( \hat{\delta }_{1,k}=1, \text{ for } \text{ all }~~k\in \{v+1,\ldots ,p\} \right) =1\). On the other hand, by the condition \(n^{r/(2r+1)}\lambda _{\max }\rightarrow 0\), for any \(\varepsilon > 0\) and \(k\in \{1,\ldots ,v\}\), we have \( \Pr \left( \hat{\delta }_{1,k}>n^{\frac{-r}{1+2r}}\varepsilon \right) \rightarrow 0\), which implies that \(\hat{\delta }_{1,k}=o_{p}(n^{-r/(2r+1)})\) for each \(k\in \{1,\ldots ,v\}\). Therefore, \(\lim _{n\rightarrow \infty }\Pr \left( \hat{\delta }_{1,k}<1, \text{ for } \text{ all } k\in \{1,\ldots ,v\} \right) =1\). The proof of (a) is completed. For part (b), applying similar techniques as in part (a), we have, with probability tending to 1, that \(\hat{\delta }_{2,k}=1\) for \(k\in \{c+1,\ldots ,q\}\) and \(\hat{\delta }_{2,k}<1\) for \(k\in \{1,\ldots ,c\}\).
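The dichotomy in Theorem 3.1 (\(\hat{\delta }_{1,k}=1\) for null groups, \(\hat{\delta }_{1,k}<1\) for active ones) drives the selection mechanism: a weight equal to one shrinks its group exactly to zero. A minimal numerical sketch, assuming weights of the Ueki (2009) smooth-threshold form \(\hat{\delta }_k=\min \{1,\lambda /\Vert \tilde{\varvec{\theta }}_k\Vert ^{1+\tau }\}\) — a form consistent with the rate conditions here, though the paper's exact tuning may differ:

```python
import numpy as np

def smooth_threshold_weights(theta_tilde_norms, lam, tau_exp=1.0):
    """Smooth-threshold weights in the style of Ueki (2009):
        delta_k = min(1, lam / ||theta_tilde_k||^(1 + tau_exp)).
    delta_k = 1 means group k is shrunk exactly to zero;
    delta_k near 0 leaves the initial estimate almost unpenalized."""
    raw = lam / np.asarray(theta_tilde_norms, dtype=float) ** (1.0 + tau_exp)
    return np.minimum(1.0, raw)
```

A strong signal (large \(\Vert \tilde{\varvec{\theta }}_k\Vert \)) yields a tiny weight, while a null group whose initial estimate is \(O_p(n^{-r/(2r+1)})\) pushes the raw weight past one, where it is capped and the group is removed.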
Proof (b) of Theorem 3.2. This can be proved by the same method as in the proof of Theorem 2.2 of Wang and Lin (2016); we omit the details to save space.
Wang, K., Sun, X. Efficient parameter estimation and variable selection in partial linear varying coefficient quantile regression model with longitudinal data. Stat Papers 61, 967–995 (2020). https://doi.org/10.1007/s00362-017-0970-0
Keywords
- Semiparametric model
- Longitudinal data
- Basis spline
- Quantile regression
- Variable selection
- Oracle property