Abstract
Quantile regression is a powerful complement to the usual mean regression and has become increasingly popular due to its desirable properties. In longitudinal studies, it is necessary to account for the intra-subject correlation among repeated measures over time to improve estimation efficiency. In this paper, we focus on longitudinal single-index models. First, we apply the modified Cholesky decomposition to parameterize the intra-subject covariance matrix and develop a regression approach to estimate the parameters of the covariance matrix. Second, we propose efficient quantile estimating equations for the index coefficients and the link function based on the estimated covariance matrix. Since the proposed estimating equations involve a discrete indicator function, we develop smoothed estimating equations that allow fast and accurate computation of the index coefficients, as well as of their asymptotic covariances. Third, we establish the asymptotic properties of the proposed estimators. Finally, simulation studies and a real data analysis illustrate the efficiency of the proposed approach.
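As a point of orientation, the idea underlying the quantile estimating equations can be illustrated numerically: the \(\tau \)th sample quantile minimizes the check loss \(\rho _\tau (u) = u\left( \tau - I(u < 0)\right) \). The sketch below (Python with numpy, synthetic data) is illustrative only and is not the paper's estimator.

```python
# A minimal numerical sketch (not the paper's estimator): the tau-th sample
# quantile minimizes the check loss rho_tau(u) = u * (tau - I(u < 0)), the
# loss function underlying quantile regression.
import numpy as np

def check_loss(u, tau):
    return u * (tau - (u < 0).astype(float))

rng = np.random.default_rng(1)
y = rng.normal(size=2001)
tau = 0.25
grid = np.sort(y)
objective = np.array([check_loss(y - c, tau).sum() for c in grid])
c_hat = grid[int(np.argmin(objective))]
# the minimizer agrees with the empirical tau-quantile
assert abs(c_hat - np.quantile(y, tau)) < 0.01
```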
References
Cui, X., Härdle, W. K., Zhu, L. (2011). The EFM approach for single-index models. The Annals of Statistics, 39, 1658–1688.
de Boor, C. (2001). A practical guide to splines. New York: Springer.
Guo, C., Yang, H., Lv, J., Wu, J. (2016). Joint estimation for single index mean-covariance models with longitudinal data. Journal of the Korean Statistical Society, 45, 526–543.
Horowitz, J. L. (1998). Bootstrap methods for median regression models. Econometrica, 66, 1327–1351.
Jung, S. (1996). Quasi-likelihood for median regression models. Journal of the American Statistical Association, 91, 251–257.
Lai, P., Wang, Q., Lian, H. (2012). Bias-corrected GEE estimation and smooth-threshold GEE variable selection for single-index models with clustered data. Journal of Multivariate Analysis, 105, 422–432.
Leng, C., Zhang, W., Pan, J. (2010). Semiparametric mean-covariance regression analysis for longitudinal data. Journal of the American Statistical Association, 105, 181–193.
Li, Y. (2011). Efficient semiparametric regression for longitudinal data with nonparametric covariance estimation. Biometrika, 98(2), 355–370.
Liang, K. Y., Zeger, S. L. (1986). Longitudinal data analysis using generalized linear models. Biometrika, 73, 13–22.
Lin, H., Zhang, R., Shi, J., Liu, J., Liu, Y. (2016). A new local estimation method for single index models for longitudinal data. Journal of Nonparametric Statistics, 28, 644–658.
Liu, S., Li, G. (2015). Varying-coefficient mean-covariance regression analysis for longitudinal data. Journal of Statistical Planning and Inference, 160, 89–106.
Liu, X., Zhang, W. (2013). A moving average Cholesky factor model in joint mean-covariance modeling for longitudinal data. Science China Mathematics, 56, 2367–2379.
Lv, J., Guo, C., Yang, H., Li, Y. (2017). A moving average Cholesky factor model in covariance modeling for composite quantile regression with longitudinal data. Computational Statistics and Data Analysis, 112, 129–144.
Ma, S., He, X. (2016). Inference for single-index quantile regression models with profile optimization. The Annals of Statistics, 44, 1234–1268.
Ma, S., Song, P. X.-K. (2015). Varying index coefficient models. Journal of the American Statistical Association, 110, 341–356.
Mao, J., Zhu, Z., Fung, W. K. (2011). Joint estimation of mean-covariance model for longitudinal data with basis function approximations. Computational Statistics and Data Analysis, 55, 983–992.
Pollard, D. (1990). Empirical processes: Theories and applications. Hayward, CA: Institute of Mathematical Statistics.
Wang, H., Zhu, Z. (2011). Empirical likelihood for quantile regression models with longitudinal data. Journal of Statistical Planning and Inference, 141, 1603–1615.
Xu, P., Zhu, L. (2012). Estimation for a marginal generalized single-index longitudinal model. Journal of Multivariate Analysis, 105, 285–299.
Yao, W., Li, R. (2013). New local estimation procedure for a non-parametric regression function for longitudinal data. Journal of the Royal Statistical Society: Series B, 75, 123–138.
Ye, H., Pan, J. (2006). Modelling of covariance structures in generalised estimating equations for longitudinal data. Biometrika, 93, 927–941.
Zhang, D., Lin, X., Raz, J., Sowers, M. (1998). Semiparametric stochastic mixed models for longitudinal data. Journal of the American Statistical Association, 93, 710–719.
Zhang, W., Leng, C. (2012). A moving average Cholesky factor model in covariance modeling for longitudinal data. Biometrika, 99, 141–150.
Zhao, W., Lian, H., Liang, H. (2017). GEE analysis for longitudinal single-index quantile regression. Journal of Statistical Planning and Inference, 187, 78–102.
Zheng, X., Fung, W. K., Zhu, Z. (2014). Variable selection in robust joint mean and covariance model for longitudinal data analysis. Statistica Sinica, 24, 515–531.
Acknowledgements
The authors are very grateful to the editor and anonymous referees for their detailed comments on the earlier version of the manuscript, which led to a much improved paper. This work is supported by the National Social Science Fund of China (Grant No. 17CTJ015).
Appendix
In the proofs, C denotes a positive constant that might assume different values at different places. For any matrix \({\varvec{A}} = \left( {{A_{ij}}} \right) _{i = 1,j = 1}^{s,t}\), denote \({\left\| {\varvec{A}} \right\| _\infty } = {\max _{1 \le i \le s}}\sum _{j = 1}^t {\left| {{A_{ij}}} \right| } \). To establish the asymptotic properties of the proposed estimators, the following regularity conditions are needed in this paper.
(C1) Let \(\mathscr {U}=\left\{ {u:u = {\varvec{X}}_{ij}^T{\beta } ,{{\varvec{X}}_{ij}} \in A,i = 1,\ldots ,n,j = 1,\ldots ,{m_i}} \right\} \) and A be the support of \({\varvec{X}}_{ij}\) which is assumed to be compact. Suppose that the density function \(f_{{\varvec{X}}_{ij}^T{\beta }}(u)\) of \({\varvec{X}} _{ij}^T{\beta }\) is bounded away from zero and infinity on \(\mathscr {U}\) and satisfies the Lipschitz condition of order 1 on \(\mathscr {U}\) for \({\beta }\) in a neighborhood of \({\beta }_0\).
(C2) The function \(g_0\left( \cdot \right) \) has the dth bounded and continuous derivatives for some \(d\ge 2\) and \(g_{1s}(\cdot )\) satisfies the Lipschitz condition of order 1, where \(g_{1s}\left( u\right) \) is the sth component of \({\varvec{g}}_1\left( u\right) =E\left( {{{\varvec{X}}_{ij}}\left| {{\varvec{X}}_{ij}^T{{\beta } _0} = u} \right. } \right) \), \(s=1,\ldots ,p\).
(C3) Let the distance between neighboring knots be \({H_{i}} = {\xi _{i}} - {\xi _{i - 1}}\) and \(H = {\max _{1 \le i \le {N_n} + 1}}\left\{ {{H_{i}}} \right\} \). Then, there exists a constant \(C_{0}\) such that \(\frac{{{H}}}{{{{\min }_{1 \le i \le {N_n} + 1}}\left\{ {{H_{i}}} \right\} }} < {C_{0}}, {\max _{1 \le i \le {N_n}}}\left\{ {{H_{i + 1}} - {H_{i}}} \right\} = o({N_n^{ - 1}})\).
(C4) The distribution function \(F_{ij}(t)=p\left( {{Y_{ij}} - g_0\left( {\varvec{X}}_{ij}^T{{\beta } _0} \right) \le t} \right) \) is absolutely continuous, with continuous density \(f_{ij}\left( \cdot \right) \) uniformly bounded away from 0 and \(\infty \) at the point 0 and with first derivative \({f'_{ij}}\left( \cdot \right) \) uniformly bounded, \(i=1,\ldots ,n, j=1,\ldots ,m_i\).
(C5) The eigenvalues of \({\varvec{\varSigma }}_{\tau i}\) are uniformly bounded and bounded away from zero.
(C6) \(K\left( \cdot \right) \) is bounded and compactly supported on \([-1,1]\). For some constant \(C_K\ne 0\), \(K\left( \cdot \right) \) is a \(\nu \)th-order kernel, i.e., \(\int {{u^j}} K\left( u \right) du = 1\) if \(j=0\); 0 if \(1 \le j \le \nu - 1\); \(C_K\) if \(j=\nu \), where \(\nu \ge 2\).
(C7) The positive bandwidth parameter h satisfies \(n{h^{2 \nu }} \rightarrow 0\).
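The moment constraints in condition (C6) can be checked numerically for a concrete kernel. The sketch below assumes the Epanechnikov kernel \(K(u) = 0.75(1-u^2)\) on \([-1,1]\) (an illustrative choice, not prescribed by the paper), which is a second-order kernel with \(C_K = 0.2\).

```python
# Numerical check that the Epanechnikov kernel K(u) = 0.75*(1 - u^2) on [-1, 1]
# satisfies the nu-th order moment conditions of (C6) with nu = 2:
# the integral of K is 1, the first moment vanishes, and the second moment
# equals C_K != 0.
from scipy.integrate import quad

K = lambda u: 0.75 * (1.0 - u**2)

moment = lambda j: quad(lambda u: u**j * K(u), -1.0, 1.0)[0]

assert abs(moment(0) - 1.0) < 1e-10   # j = 0: integrates to 1
assert abs(moment(1)) < 1e-10         # j = 1: vanishes by symmetry
assert abs(moment(2) - 0.2) < 1e-10   # j = 2: C_K = 0.2, nonzero
```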
Lemma 1
Under conditions (C1)–(C7), if \(N_n\rightarrow \infty \) and \(n N_n^{-1} \rightarrow \infty \) as \(n\rightarrow \infty \), then (i) \(\left| {{{\hat{g}}}({u};{{\beta } _0}) - {g_0}({u})} \right| = O_p\left( {\sqrt{{{{N_n}} \big / n}} + N_n^{ - d}} \right) \) uniformly for any \(u\in [a,b]\); and (ii) if, in addition, \(n N_n^{-3} \rightarrow \infty \) as \(n\rightarrow \infty \), then \(\left| {{{\hat{g}'}}({u};{{\beta } _0}) - {g'_0}({u})} \right| = O_p\left( {\sqrt{{{N_n^3} \big / n}} + N_n^{ - d + 1}} \right) \) uniformly for any \(u\in [a,b]\).
Proof
Suppose \(g^0(u)={\varvec{B}}_q(u)^T {\varvec{\theta }}^0\) is the best approximating spline function for \({g}_0(u)\). By the result on page 149 of de Boor (2001), for \({g}_0(u)\) satisfying condition (C2), we have
Let \({\alpha _n} =N_n n^{-1/2} + N_n^{ - d+1/2}\) and set \(\left\| {{{\varvec{u}}_n}} \right\| = C\). Our aim is to show that, for any given \(\delta > 0\), there exists a sufficiently large constant C such that, for large n, we have
This implies that there is a local minimizer \({{\hat{{\varvec{\theta }}}} }\) in the ball \(\left\{ {{{\varvec{\theta }} ^0} + {\alpha _n}{\varvec{u}}_n:\left\| {\varvec{u}}_n \right\| \le C} \right\} \) with probability tending to one, such that \(\left\| {{\hat{{\varvec{\theta }}}} - {{\varvec{\theta }} ^0}} \right\| = {O_p}\left( {{\alpha _n}} \right) \). Define \({\varDelta _{ij}} = {{\varvec{B}}_q}{\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) ^T}{{\varvec{\theta }} ^0} - {g_0}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \). Applying the identity
we have
The observed covariate vector is written as \(\mathscr {D}=\Big \{ {\varvec{X}}_{11}^T,\ldots ,{\varvec{X}}_{1{m_1}}^T,\ldots ,{\varvec{X}}_{n1}^T,\ldots ,{\varvec{X}}_{n{m_n}}^T \Big \}^T\). Moreover, we have
and
In addition,
Moreover, the condition that \(\varepsilon _{ij}\) has the \(\tau \)th quantile zero implies \(E\left( {{\psi _\tau }\left( {{\varepsilon _{ij}}} \right) } \right) = 0\). By (11) and condition (C4), we have \(E\left( I\right) = o\left( 1\right) \) and
implies that \(I= {O_p}\left( {\sqrt{{{n\alpha _n^2} \big / {{N_n}}}} } \right) \left\| {{{\varvec{u}}_n}} \right\| \). Based on all the above, \({L_n}\left( {{{\beta } _0};{{\varvec{\theta }} ^0} + {\alpha _n}{{\varvec{u}}_n}} \right) - {L_n}\left( {{{\beta } _0};{{\varvec{\theta }} ^0}} \right) \) is dominated by \(\frac{1}{2}\alpha _n^2{\varvec{u}}_n^T\left( {\sum _{i = 1}^n {\sum _{j = 1}^{{m_i}} {{f_{ij}}\left( 0 \right) {{\varvec{B}}_q}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) {{\varvec{B}}_q}{{\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) }^T}} } } \right) {{\varvec{u}}_n}\) by choosing a sufficiently large \(\left\| {{{\varvec{u}}_n}} \right\| =C\). Therefore, (12) holds and there exists a local minimizer \({\hat{{\varvec{\theta }}}}\) such that
Since \(\left\| {{{\varvec{B}}_q}\left( u \right) {{\varvec{B}}_q}{{\left( u \right) }^T}} \right\| = O\left( 1/N_n \right) \), together with (13), we have
By the triangle inequality, \(\left| {\hat{g}\left( {u;{{\beta } _0}} \right) - {g_0}\left( u \right) } \right| \le \left| {\hat{g}\left( {u;{{\beta } _0}} \right) - {g^0}\left( u \right) } \right| + \big | {g^0}\left( u \right) - {g_0}\left( u \right) \big |\). Therefore, by (11) and (14), we have \(\left| {\hat{g}\left( {u;{{\beta } _0}} \right) - g_0\left( u \right) } \right| = {O_p}\left( {N_n^{ - d} + \sqrt{{{{N_n}} \big / n}} } \right) \) uniformly for every \(u\in [a,b]\).
Note that \({{\hat{g}'}}({u};{\beta } _0)={\varvec{B}}_{q-1}(u)^T {\varvec{D}}_1 {\hat{{\varvec{\theta }}}} ({\beta }_0)\), where \({\varvec{B}}_{q-1}(u)=\{B_{s,q}(u): 2\le s \le J_n\}^T\) is the \((q-1)\)th-order B-spline basis and \({\varvec{D}}_1\) is defined in Sect. 2.1. It is easy to prove that \({\left\| {{{\varvec{D}}_1}} \right\| _\infty } = O({N_n})\). Then, employing techniques similar to those used in the proof for \({{\hat{g}}}({u};{\beta } _0)\), we obtain that
uniformly for any \(u\in [a,b]\). This completes the proof. \(\square \)
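The spline approximation rate invoked from de Boor (2001) can be illustrated with off-the-shelf least-squares B-splines. The sketch below (scipy, a toy target function, not the paper's estimator) shows the sup-norm error shrinking as interior knots are added, consistent with the \(N_n^{-d}\) bias term in Lemma 1.

```python
# An illustrative sketch (not the paper's estimator) of the B-spline
# approximation rate: the sup-norm error of a least-squares cubic spline fit
# to a smooth function shrinks as interior knots are added.
import numpy as np
from scipy.interpolate import make_lsq_spline

def sup_error(n_interior, k=3):
    """Sup-norm error of a least-squares spline fit of sin(2*pi*x) on [0, 1]."""
    x = np.linspace(0.0, 1.0, 2000)
    y = np.sin(2.0 * np.pi * x)
    interior = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    # boundary knots with multiplicity k + 1, as the B-spline basis requires
    t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]
    spl = make_lsq_spline(x, y, t, k=k)
    return float(np.max(np.abs(spl(x) - y)))

# more interior knots -> smaller sup-norm error
assert sup_error(20) < sup_error(10) < 0.1
```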
Lemma 2
Under conditions (C1)–(C7), if the number of knots satisfies \({n^{{1 / {(2d + 1)}}}} \ll {N_n} \ll {n^{{1 / 4}}}\), then for any \(J_n \times 1\) vector \({\varvec{c}}_n\) whose components are not all 0, we have
where \(\bar{\sigma } _n^2\left( u \right) = {\varvec{c}}_n^T{{\varvec{V}}^{ - 1}}\left( {{{\beta } _0}} \right) \sum _{i = 1}^n {{\varvec{B}}_q^T\left( {{{\varvec{X}}_i}{{\beta } _0}} \right) {{\varvec{{\varLambda } }}_i}{{\varvec{\varSigma }} _{\tau i}}{{{\varvec{\varLambda }} }_i}{{\varvec{B}}_q}\left( {{{\varvec{X}}_i}{{\beta } _0}} \right) } {{\varvec{V}}^{ - 1}}\left( {{{\beta } _0}} \right) {{\varvec{c}}_n}\) and the definition of \({\varvec{V}}\left( {{{\beta } _0}} \right) \) is given in subsection 2.4.
Proof
When \({\beta }={\beta }_0\), the minimizer \({\hat{{\varvec{\theta }}} }\) of (1) satisfies the score equations
Then, the left-hand side of Eq. (15) becomes
where \({\zeta _{ij}} = {{\varvec{B}}_q}{\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) ^T}{\varvec{\theta }} - {g_0}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \). By (11), a Taylor expansion of \({{F_{ij}}\left( {{\zeta _{ij}}} \right) }\) at 0 gives
By direct calculation of the mean and variance, we can show that \( III = {o_p}\left( {\sqrt{n/{N_n}} } \right) \). This combined with (15)–(17) leads to
It is easy to derive that \(I = \sum _{i = 1}^n {{\varvec{B}}_q^T\left( {{{\varvec{X}}_i}{{\beta } _0}} \right) {\varvec{\varLambda }}_i{\psi _\tau }\left( {{{\varvec{\varepsilon }}_i}} \right) }\) is a sum of independent vectors, \( E\left( I\right) =0\) and \(Cov\left( I \right) = \sum _{i = 1}^n {{\varvec{B}}_q^T\left( {{{\varvec{X}}_i}{{\beta } _0}} \right) {\varvec{\varLambda }}_i {{\varvec{\varSigma }} _{\tau i}} {\varvec{\varLambda }}_i {{\varvec{B}}_q}\left( {{{\varvec{X}}_i}{{\beta } _0}} \right) } .\) By the multivariate central limit theorem and Slutsky's theorem, we can complete the proof. \(\square \)
Lemma 3
Under conditions (C1)–(C7), if the number of knots satisfies \({n^{{1 /{(2d + 2)}}}} \ll {N_n} \ll {n^{{1 / 4}}}\), we have
Proof
Define \({\varvec{H}}_i^T = {{\varvec{J}}_{{{\beta } ^{(r)}}}^T{\hat{{\varvec{X}}}}_i^T {\hat{{\varvec{G}}}'} \left( {{{\varvec{X}}_i}{\beta } ;{\beta } } \right) }\), \({\varvec{S_{i}}}=(S_{i1},\ldots ,S_{im_i})^T\) with \({S_{ij}} ={S_{ij}}\left( {\beta } \right) ={\psi _\tau }\left( {{Y_{ij}} - \hat{g}\left( {{\varvec{X}}_{ij}^T{\beta } ;{\beta } } \right) } \right) =I\left( {Y_{ij}}-{\hat{g}\left( {{\varvec{X}}_{ij}^T{\beta } ;{\beta } } \right) }\le 0 \right) - \tau \) being a discontinuous function, then \(R\left( {\beta } ^{(r)} \right) = \sum _{i = 1}^n { {{\varvec{H}}_i^T {\varvec{\varLambda }} _i{{\varvec{S}}_{i}}\left( {\beta } \right) } } \). Let \(\bar{R}\left( {\beta } ^{(r)} \right) = \sum _{i = 1}^n { {{\varvec{H}}_i^T{\varvec{\varLambda }} _i{{\varvec{P}}_{i}}\left( {\beta } \right) } } \), where \({\varvec{P_{i}}}=(P_{i1},\ldots ,P_{im_i})^T\) with \(P_{ij}={P_{ij}}\left( {\beta } \right) =p\left( {{Y_{ij}} - \hat{g}\left( {{\varvec{X}}_{ij}^T{\beta } ;{\beta } } \right) \le 0} \right) - \tau .\) For any \({\beta }^{(r)}\) satisfying \(\left\| {\beta }^{(r)}-{\beta }_0^{(r)}\right\| \le C{n^{{{ - 1} / 2}}}\), we have
First, the first term can be written as
where \({\varvec{H}}_i^T = \left( {{{\varvec{h}}_{i1}},\ldots ,{{\varvec{h}}_{i{m_i}}}} \right) \) and \({\varvec{h}}_{ij}\) is a \((p-1)\times 1\) vector. According to Lemma 3 in Jung (1996) and Lemma 1, we have \(\sup \left| \varUpsilon \right| = {o_p}\left( {\sqrt{n} } \right) \). Then, the first term becomes
By the law of large numbers (Pollard 1990), together with Lemma 1, the second term becomes
Therefore, \( { R\left( {\beta }^{(r)} \right) - R\left( {\beta }_0^{(r)} \right) } =\bar{R}\left( {\beta } ^{(r)} \right) + {o_p}\left( {\sqrt{n} } \right) \). By Taylor’s expansion of \(\bar{R}\left( {\beta } ^{(r)} \right) \), we can obtain
Because \(R\left( {\hat{{\beta }}}^{(r)} \right) = 0\) and \({\hat{{\beta }}}^{(r)}\) is in the \(n^{-1/2}\) neighborhood of \({\beta }_0^{(r)}\), we have
where \(R\left( {{\beta }_0 ^{(r)}} \right) =\sum _{i = 1}^n {{\varvec{J}}_{{{\beta }^{(r)}}}^T{\hat{{\varvec{X}}}}_i^T{\hat{{\varvec{G'}}}}\left( {{{\varvec{X}}_i}{{\beta } };{{\beta } }} \right) {\varvec{\varLambda }} _i{{\varvec{S}}_{i}}\left( {\beta } \right) } |_{{\beta } ^{(r)}={\beta }_0^{(r)}} \), \({{\varvec{S}}_{i}}\left( {\beta }_0 \right) = {\left( {{S_{i1}}\left( {\beta }_0 \right) ,\ldots ,{S_{i{m_i}}}\left( {\beta }_0 \right) } \right) ^T}\) with \({S_{ij}}\left( {\beta } _0 \right) = {I\left( {{Y_{ij}} - \hat{g}\left( {{\varvec{X}}_{ij}^T{{\beta } _0};{{\beta } _0}} \right) \le 0} \right) }-\tau \) and
Thus, we have
where \(S_{ij}^*\left( {{{\beta } _0}} \right) =I\left( {{Y_{ij}} - g_0\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \le 0} \right) - \tau \). Moreover, since \(\Big | \hat{g}\left( {{\varvec{X}}_{ij}^T{{\beta } _0};{{\beta } _0}} \right) - g_0\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \Big | = {O_p}\left( {\sqrt{{{{N_n}} \big / n}} + N_n^{ - d}} \right) \) and \(E\left\{ {I\left( {{Y_{ij}} - {g_0}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \le 0} \right) } \right\} = p\left( {{Y_{ij}} - {g_0}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \le 0} \right) = \tau \), we have \(E\left( {S_{ij}}\left( {{{\beta } _0}} \right) \right) =o\left( 1\right) \). Therefore, we have \(E (\varvec{S}_i (\beta _0)) = o (1)\) and
Based on Lemma 1, together with the fact that \({\varvec{S}}_{i}\left( {{{\varvec{\beta }}_0}} \right) \) are independent random vectors, the multivariate central limit theorem implies that
By the law of large numbers and Lemma 1, we have
Then, combining (18)–(20) and using Slutsky's theorem, it follows that
According to the result of (21) and the multivariate delta method, we have
\(\square \)
Proof of Theorem 1
Using conditions (C4), (C6), and (C7), similar to Lemma 3 (k) of Horowitz (1998), we obtain \({n^{{{ - 1} / 2}}}\tilde{R}\left( {{\beta }_0 ^{(r)}} \right) = {n^{{{ - 1} / 2}}}R\left( {{\beta }_0 ^{(r)}} \right) + {o_p}\left( 1 \right) \). In order to prove the asymptotic normality of \(\tilde{{\beta }}^{(r)}\), we need to prove \({n^{ - 1}}\left\{ {\tilde{D}\left( {{\beta } _0^{(r)}} \right) - D\left( {{\beta } _0^{(r)}} \right) } \right\} \mathop \rightarrow \limits ^p 0\), where
It is easy to get that
where \({\varvec{h}}_{ij}\) is given in the proof of Lemma 3. Because
where \(\varsigma _t\) is between \( \hat{g}\left( {{\varvec{X}}_{ij}^T{{\beta } _0};{{\beta } _0}} \right) -{g_0}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \) and \(ht+ \hat{g}\left( {{\varvec{X}}_{ij}^T{{\beta } _0};{{\beta } _0}} \right) -{g_0}\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) \). By condition (C4), \({f'_{ij}}\left( \cdot \right) \) is uniformly bounded; hence, there exists a constant M satisfying \(\left| {{f'_{ij}}\left( {{\varsigma _t}} \right) } \right| \le M\). Combining \(\left| {\hat{g}\left( {{\varvec{X}}_{ij}^T{{\beta } _0};{{\beta } _0}} \right) - g_0\left( {{\varvec{X}}_{ij}^T{{\beta } _0}} \right) } \right| = {O_p}\left( {\sqrt{{{{N_n}} \big / n}} + N_n^{ - d}} \right) \) with conditions (C4), (C6), and (C7), we have
So we can obtain \(\left| {{n^{ - 1}}\left\{ E\left\{ {\tilde{D}\left( {{{\beta } _0^{(r)}}} \right) } \right\} - D\left( {{{\beta } _0^{(r)}}} \right) \right\} } \right| \rightarrow 0\). By the strong law of large numbers, we have \({{n^{ - 1}}\tilde{D}\left( {{{\beta } _0^{(r)}}} \right) \rightarrow E\left( {{n^{ - 1}}\tilde{D}\left( {{{\beta } _0^{(r)}}} \right) } \right) }\). Using the triangle inequality, we have
Furthermore, by the Taylor series expansion of \(\tilde{R}\left( {{{\beta } ^{(r)}}} \right) \) around \({\beta }_0^{(r)}\), we have
where \({\beta } ^*\) lies between \({\beta }^{(r)}\) and \({\beta }_0^{(r)}\). Letting \({\beta }^{(r)}={\tilde{{\beta }}}^{(r)}\), we have
since \(\tilde{R}\left( {\tilde{{\beta }} }^{(r)} \right) =0\). Because \({\tilde{{\beta }}}^{(r)}\rightarrow {\beta }_0^{(r)}\), we obtain \( {\beta }^*\rightarrow {\beta }_0^{(r)}\) and \({{\tilde{D}}^{ - 1}}\left( {{{\beta } ^*}} \right) \rightarrow {{\tilde{D}}^{ - 1}}\left( {{{\beta } _0^{(r)}}} \right) \). Since \({n^{ - 1}}\left\{ {\tilde{D}\left( {{{\beta } _0^{(r)}}} \right) - D\left( {{{\beta } _0^{(r)}}} \right) } \right\} \mathop \rightarrow \limits ^p 0\) and \({n^{{{ - 1} / 2}}}\tilde{R}\left( {{\beta } _0 ^{(r)}} \right) = {n^{{{ - 1} / 2}}}R\left( {{\beta } _0 ^{(r)}} \right) + {o_p}\left( 1 \right) \), we have
Next, similar to the proof of Lemma 3, we can complete the proof of Theorem 1. \(\square \)
Proof of Theorem 2
Since \(\left\| {{\tilde{{\beta }}}- {{\beta } _0}} \right\| ={O_p}\left( {{n^{{{ - 1} / 2}}}} \right) \), Theorem 2 (i) follows from this result and Lemma 1. Based on Lemma 2, we have
By the definition of \({\hat{g}\left( {u;{\beta } } \right) }\) and \(\check{g} \left( u;{\beta } \right) \), choosing \({\varvec{c}}_n={\varvec{B}}_q\left( {u } \right) \) yields
Thus, when \({\beta }\) is a known constant \({\beta }_0\) or estimated to the order \({{O_p}\left( {{n^{{{ - 1} / 2}}}} \right) }\), we can complete the proof of Theorem 2 (ii). \(\square \)
Proof of Theorem 3
Let \({\varvec{\varSigma }}_{\tau i}=\left( {{\sigma _{ijj'}}} \right) _{j,j' = 1}^{{m_i}}\) and \({\hat{{\varvec{\varSigma }}}}_{\tau i}=\left( {{\hat{\sigma } _{ijj'}}} \right) _{j,j' = 1}^{{m_i}}\) for \(i=1,\ldots ,n\). Based on the modified Cholesky decomposition, the diagonal elements of \({\hat{{\varvec{\varSigma }}}}_{\tau i}\) are \({\hat{\sigma } _{ijj}} = \hat{d}_{_{\tau ,ij}}^2 + \sum _{k = 1}^{j - 1} {\hat{l}_{\tau ,ijk}^2\hat{d}_{{\tau ,ik}}^2} \) for \(j=1,\ldots ,m_i\), and the elements below the diagonal are \({\hat{\sigma } _{ijk}} = {{\hat{l}}_{\tau ,ijk}}\hat{d}_{_{\tau ,ik}}^2 + \sum _{k' = 1}^{k - 1} {{{\hat{l}}_{\tau ,ijk'}}{{\hat{l}}_{\tau ,ikk'}}\hat{d}_{{\tau ,ik'}}^2} \) for \(j=2,\ldots ,m_i, k=1,\ldots ,j-1\). Similarly, the diagonal elements of \({\varvec{\varSigma }}_{\tau i}\) are \({\sigma _{ijj}} = d_{{\tau ,ij}}^2 + \sum _{k = 1}^{j - 1} { l_{\tau ,ijk}^2 d_{{\tau ,ik}}^2} \) for \(j=1,\ldots ,m_i\), and the elements below the diagonal are \({\sigma _{ijk}} = {{ l}_{\tau ,ijk}} d_{_{\tau ,ik}}^2 + \sum _{k' = 1}^{k - 1} {{{ l}_{\tau ,ijk'}}{{l}_{\tau ,ikk'}} d_{{\tau ,ik'}}^2} \) for \(j=2,\ldots ,m_i, k=1,\ldots ,j-1\). Since \(\left( {{\hat{{\gamma }}}_\tau ^T,{\hat{{\varvec{\lambda }}}}_\tau ^T} \right) ^T\) is a \(\sqrt{n} \)-consistent estimator, together with \(\hat{d} _{\tau ,ij}^2= \exp \left( {\varvec{z}}_{ij}^T{\hat{{\varvec{\lambda }}}}_\tau \right) \) and \(\hat{l}_{\tau ,ijk}= {\varvec{w}}_{ijk}^T{{\hat{{\gamma }}}_\tau }\) for \(k<j=2,\ldots ,m_i\), we have
Therefore, for \(j=1,\ldots ,m_i\), we have
and
for \(j=2,\ldots ,m_i, k=1,\ldots ,j-1\). This completes the proof. \(\square \)
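The elementwise expressions in this proof are exactly the entries of \({\varvec{L}}{\varvec{D}}{\varvec{L}}^T\), with \({\varvec{L}}\) unit lower triangular (entries \(l_{jk}\)) and \({\varvec{D}}\) diagonal with entries \(d_j^2\), as in the modified Cholesky decomposition. A small numerical sketch (numpy, random illustrative values) confirming this:

```python
# Numerical check: the diagonal and below-diagonal formulas in the proof of
# Theorem 3 match the entries of Sigma = L D L^T, where L is unit lower
# triangular with entries l_{jk} and D = diag(d_j^2).
import numpy as np

rng = np.random.default_rng(0)
m = 4
L = np.eye(m)
L[np.tril_indices(m, -1)] = rng.normal(size=m * (m - 1) // 2)
d2 = np.exp(rng.normal(size=m))  # innovation variances d_j^2 > 0
Sigma = L @ np.diag(d2) @ L.T

for j in range(m):
    # diagonal: sigma_jj = d_j^2 + sum_{k<j} l_{jk}^2 d_k^2
    sjj = d2[j] + sum(L[j, k] ** 2 * d2[k] for k in range(j))
    assert np.isclose(Sigma[j, j], sjj)
    for k in range(j):
        # below diagonal: sigma_jk = l_{jk} d_k^2 + sum_{k'<k} l_{jk'} l_{kk'} d_{k'}^2
        sjk = L[j, k] * d2[k] + sum(L[j, kp] * L[k, kp] * d2[kp] for kp in range(k))
        assert np.isclose(Sigma[j, k], sjk)
```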
Proof of Theorem 4
Similar to the proof of Lemma 3, we have
where \({{\bar{{\varvec{S}}}}_{i}}\left( {\beta } \right) = {\left( {{\bar{S}_{i1}}\left( {\beta } \right) ,\ldots ,{\bar{S}_{i{m_i}}}\left( {\beta } \right) } \right) ^T}\) with \({\bar{S}_{ij}}\left( {\beta } \right) =\psi _{h\tau }\left( Y_{ij} - \bar{g}\left( {{\varvec{X}}_{ij}^T{{\beta }};{{\beta } }} \right) \right) \), \(\bar{R}\left( {{\beta } _0^{(r)}} \right) =\sum _{i = 1}^n {{\varvec{J}}_{{{\beta } ^{(r)}}}^T{\hat{{\varvec{X}}}}_i^T{\bar{{\varvec{G}}'}}\left( {{{\varvec{X}}_i}{{\beta } };{{\beta } }} \right) {\varvec{\varLambda }} _i{\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1}{{\bar{{\varvec{S}}}}_{i}}\left( {\beta } \right) } \left| {_{{{\beta } ^{(r)}} = {\beta } _0^{(r)}}} \right. \) and
Using conditions (C4), (C6), and (C7), similar to Lemma 3 (k) of Horowitz (1998), we obtain
where \({{\varvec{S}}_{i}^*}\left( {\beta } \right) = {\left( {{S_{i1}^*}\left( {\beta } \right) ,\ldots ,{S_{i{m_i}}^*}\left( {\beta } \right) } \right) ^T}\) with \({S_{ij}^*}\left( {\beta } \right) =\psi _{\tau }\left( Y_{ij} - \bar{g}\left( {{\varvec{X}}_{ij}^T{{\beta } };{{\beta } }} \right) \right) \) and \(R^*\left( {{\beta } _0^{(r)}} \right) =\sum _{i = 1}^n {{\varvec{J}}_{{{\beta } ^{(r)}}}^T{\hat{{\varvec{X}}}}_i^T{\bar{{\varvec{G}}'}}\left( {{{\varvec{X}}_i}{{\beta } };{{\beta } }} \right) {\varvec{\varLambda }} _i{\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1}{{\varvec{S}}_{i}^*}\left( {\beta } \right) } \left| {_{{{\beta } ^{(r)}} = {\beta } _0^{(r)}}} \right. \). Similar to the proof of Theorem 1, we have
where \( D^*\left( {{{\beta } _0^{(r)}}} \right) =\sum _{i = 1}^n {\varvec{J}}_{{{\beta } ^{(r)}}}^T {\hat{{\varvec{X}}}}_i^T{\bar{{\varvec{G}}'}}\left( {{{\varvec{X}}_i}{{\beta } };{{\beta } }} \right) {\varvec{\varLambda }} _i{\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1}{{\varvec{\varLambda }} _i} {\bar{{\varvec{G}}'}}\left( {{{\varvec{X}}_i}{{\beta } };{{\beta } }} \right) {{\hat{{\varvec{X}}}}_i}{{\varvec{J}}_{{{\beta } ^{(r)}}}}\left| {_{{{\beta } ^{(r)}} = {\beta } _0^{(r)}}} \right. .\) By (22)–(24), we have
Similar to the proof of Lemma 1, we have
uniformly for any \(u\in [a,b]\). Because \({\varvec{S}}_{i}^*\left( {{{\beta } _0}} \right) \) are independent random vectors, together with (26), we have \(E\left( {{\varvec{S}}_{i}^*}\left( {{{\beta } _0}} \right) \right) =o\left( 1\right) \) and
We use the following property (see Lemma 2 in Li 2011): if \({\varvec{A}}_n\) is a sequence of random matrices converging to an invertible matrix \({\varvec{A}}\), then \({\varvec{A}}_n^{ - 1} = {{\varvec{A}}^{ - 1}} - {{\varvec{A}}^{ - 1}}\left( {{{\varvec{A}}_n} - {\varvec{A}}} \right) {{\varvec{A}}^{ - 1}} + {O_p}\left( {{{\left\| {{{\varvec{A}}_n} -{\varvec{A}}} \right\| }^2}} \right) \). Together with Theorem 3, this yields \({\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1} - {\varvec{\varSigma }} _{\tau i}^{ - 1} = {O_p}\left( {{n^{{{ - 1} / 2}}}} \right) \) uniformly for all i. By the law of large numbers, (26), and the consistency of \({\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1}\), we have
By the multivariate central limit theorem and Slutsky's theorem, together with (25), we can complete the proof. \(\square \)
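The smoothed score \(\psi _{h\tau }\) replaces the discontinuous indicator in \(\psi _\tau \left( u\right) = I\left( u \le 0\right) - \tau \) by a kernel-smoothed approximation. The exact smoother used in the paper is not reproduced here; the sketch below assumes an integrated Epanechnikov kernel as an illustrative choice and checks that the smoothed score approaches the discontinuous one away from the origin as \(h \rightarrow 0\).

```python
# A hedged sketch of smoothing the indicator I(u <= 0) with an integrated
# kernel, in the spirit of the smoothed estimating equations; the smoother G
# below (integrated Epanechnikov kernel) is an illustrative assumption, not
# necessarily the paper's choice.
import numpy as np

def G(v):
    """Integrated Epanechnikov kernel: G(v) = int_{-1}^{v} 0.75*(1 - t^2) dt."""
    v = np.clip(v, -1.0, 1.0)
    return 0.75 * (v - v**3 / 3.0) + 0.5

def psi_smooth(u, tau, h):
    return G(-u / h) - tau

def psi(u, tau):
    return (u <= 0).astype(float) - tau

u = np.linspace(-1.0, 1.0, 201)
tau = 0.5
mask = np.abs(u) > 0.1
# as h -> 0, the smoothed score matches the discontinuous one off the origin
gap_large = np.max(np.abs(psi_smooth(u, tau, 0.5) - psi(u, tau))[mask])
gap_small = np.max(np.abs(psi_smooth(u, tau, 0.05) - psi(u, tau))[mask])
assert gap_small < gap_large
```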
Proof of Theorem 5
Similar to the proof of Theorem 2, together with the consistency result \({\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1} - {\varvec{\varSigma }} _{\tau i}^{ - 1} = {O_p}\left( {{n^{{{ - 1} / 2}}}} \right) \) for \({\hat{{\varvec{\varSigma }}}} _{\tau i}^{ - 1}\), when \({\beta }\) is a known constant \({\beta }_0\) or estimated to the order \({{O_p}\left( {{n^{{{ - 1} / 2}}}} \right) }\), we can complete the proof of Theorem 5. \(\square \)
Cite this article
Lv, J., Guo, C. Quantile estimations via modified Cholesky decomposition for longitudinal single-index models. Ann Inst Stat Math 71, 1163–1199 (2019). https://doi.org/10.1007/s10463-018-0673-x