An empirical likelihood method for quantile regression models with censored data


Abstract

In this paper, we study an estimation method for censored quantile regression models based on inverse-censoring-probability weighting, and we derive the asymptotic distribution of the estimator of the parameter vector. Building on this estimator and its asymptotic distribution, we propose an empirical likelihood inference method for censored quantile regression models and establish the asymptotic property of the empirical likelihood ratio. Since the limiting distribution of the empirical likelihood ratio statistic is a mixture of chi-squared distributions, we also propose adjustment methods that make the statistic converge to a standard chi-squared distribution. The weighting scheme used in the parameter estimation is simple and the loss function is continuous and convex; therefore, compared with empirical likelihood methods for quantile regression models with completely observed data, the methods proposed in this paper do not increase the computational complexity. This makes them especially useful for data with medium- or high-dimensional covariates. Simulation studies illustrate the performance of the proposed methods.

Author information

Correspondence to Xiuqing Zhou.

Appendix: Assumptions and Proofs

In this section, we first state the three conditions used in the proposed theorems.

C1. The distribution of \(\epsilon _i\) has a unique \(\tau \)th quantile at zero, and its density function \(f(\cdot )\) is uniformly continuous at zero and satisfies \(f(0)> 0\).

C2. \(\{X_1, \ldots , X_n\}\) are independent and identically distributed bounded random vectors.

C3. The censoring variables \(\{C_i, i=1, \ldots , n\}\), which are independent of \(\{T_1, \ldots , T_n\}\), are independent and identically distributed with continuous distribution function G(t) which satisfies

$$\begin{aligned} \int _{-\infty }^{\tau _H}\frac{dF(t)}{\left[ 1-G(t)\right] ^2}<+\infty , \end{aligned}$$

where \(\tau _H=\sup \{t:[1-F(t)][1-G(t)]>0\}\).

Remark 1

The identical distribution of \(\{\epsilon _i, i=1, \ldots , n\}\) is not necessary in this paper and condition C1 can be replaced by

C1*. The distribution of \(\epsilon _i\) has a unique \(\tau \)th quantile at zero and its density function \(f_i(\cdot )\) is uniformly continuous at zero. Moreover, there is a positive number a such that \(\inf \limits _i f_i(0)\ge a> 0\) holds true.

The proposed theorems will still hold true under conditions C1*, C2 and C3.

In condition C2, we assume \(\{X_i\}\) are bounded, as Ying et al. (1995) and Qin and Tsao (2003) did in their papers. Although this assumption may look restrictive, it is typically satisfied in practical applications. The boundedness of \(\{X_i\}\) can also be relaxed and replaced by a condition on the moments of \(X_i\).

Though both C1 and C2 can be relaxed, we apply them in this paper to simplify the notation in our proofs.

Remark 2

C3 is a condition on the behavior of G(t) and F(t) near \(\tau _H\), and it obviously holds true if \(\tau _F<\tau _G\). If \(\tau _G\le \tau _F\), since \(\frac{\delta _i}{1-G(Y_i)}\) will be large for any observed \(T_i\) near \(\tau _H\), from (3) and (4) we can see that an individual with observed \(T_i\) close to \(\tau _H\) will have a large influence on the parameter estimator and the confidence region. Therefore, C3 is an assumption that prevents too many such observations from appearing in the data, thereby ensuring the asymptotic properties of the proposed methods.
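As a concrete illustration (our example, not from the paper): suppose \(T_i\sim \mathrm{Exp}(1)\) and \(C_i\sim \mathrm{Exp}(\theta )\), so that \(F(t)=1-e^{-t}\), \(G(t)=1-e^{-\theta t}\) and \(\tau _H=+\infty \). Then

$$\begin{aligned} \int _{-\infty }^{\tau _H}\frac{dF(t)}{\left[ 1-G(t)\right] ^2} =\int _0^{+\infty }e^{-t}e^{2\theta t}\,dt =\int _0^{+\infty }e^{-(1-2\theta )t}\,dt<+\infty \quad \Longleftrightarrow \quad \theta <\frac{1}{2}, \end{aligned}$$

so in this case C3 amounts to a bound on how heavy the censoring may be in the right tail.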

Before proving the theorems, we establish the following lemma, which is essential to the proofs.

Lemma 1

Under conditions C1-C3 we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau ){\mathop {\longrightarrow }\limits ^{d}} N(\mathbf{0}, \varLambda _{1}-\varLambda _{2}) \end{aligned}$$

holds true.

Proof

Let \(Q(t)=\frac{1}{n}\sum \limits _{i=1}^n\left[ \tau -I(Y_i<\beta _\tau ^TX_i) \right] I(Y_i\le \min (C_i,t))X_i\). By the martingale integral representation for \(\frac{{\hat{G}}(t)-G(t)}{1-G(t)}\) (Gill 1983; Ying 1989; Zhou 1991) we have

$$\begin{aligned}&\frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\left[ {\hat{\eta }}_i(\beta _\tau ) -\eta _i(\beta _\tau )\right] \\&\quad =\sqrt{n}\int _{-\infty }^{+\infty }\frac{{\hat{G}}(t) -G(t)}{\left[ 1-{\hat{G}}(t)\right] \left[ 1-G(t)\right] }dQ(t)\\&\quad =\sqrt{n}\int _{-\infty }^{+\infty }\frac{1}{1-{\hat{G}}(t)} \int _{-\infty }^t\frac{1-{\hat{G}}(s-)}{1-G(s)} \frac{dM(s)}{\sum \limits _{i=1}^nI(Y_i\ge s)}dQ(t)\\&\quad =\frac{1}{\sqrt{n}}\int _{-\infty }^{+\infty }\frac{1}{1-G(t)} \int _{-\infty }^t\frac{dM(s)}{h(s)}dQ(t)+o_p(1)\\&\quad =\frac{1}{\sqrt{n}}\int _{-\infty }^{+\infty }\int _s^{+\infty } \frac{1}{1-G(t)}dQ(t)\frac{dM(s)}{h(s)}+o_p(1), \end{aligned}$$

where \(M(s)=\sum \nolimits _{i=1}^n\left[ I(Y_i\le s, \delta _i=0) -\int _{-\infty }^sI(Y_i\ge v)d\varLambda (v)\right] \) and \(\varLambda (\cdot )\) is the cumulative hazard function of the censoring variable \(C_i\).

Since

$$\begin{aligned} \int _s^{+\infty }\frac{1}{1-G(t)}dQ(t)=\frac{1}{n}\sum \limits _{i=1}^n\frac{I(T_i\le C_i)}{1-G(T_i)}\left[ \tau -I(T_i<\beta _\tau ^TX_i)\right] I(T_i\ge s)X_i \end{aligned}$$

converges to q(s) uniformly in s, we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\left[ {\hat{\eta }}_i(\beta _\tau )-\eta _i(\beta _\tau )\right]= & {} \frac{1}{\sqrt{n}}\int _{-\infty }^{+\infty }\frac{q(s)}{h(s)}dM(s)+o_p(1)\\= & {} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\xi _i+o_p(1), \end{aligned}$$

where

$$\begin{aligned} \xi _i=\int _{-\infty }^{+\infty }\frac{q(s)}{h(s)}d\left[ I(Y_i\le s, \delta _i=0)-I(Y_i\ge s)d\varLambda (s)\right] . \end{aligned}$$

Thus we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )= & {} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\eta _i(\beta _\tau ) +\frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\left[ {\hat{\eta }}_i(\beta _\tau ) -\eta _i(\beta _\tau )\right] \\= & {} \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\left[ \eta _i(\beta _\tau )+\xi _i\right] +o_p(1). \end{aligned}$$

Note that \(\{\eta _i(\beta _\tau )+\xi _i, i=1, \ldots , n\}\) are independent and identically distributed with mean zero. To obtain the limiting distribution of \(\frac{1}{\sqrt{n}}\sum \nolimits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\) by the central limit theorem, we calculate the variance matrix of \(\eta _i(\beta _\tau )+\xi _i\) in the following.

It is obvious that \(var(\eta _i(\beta _\tau ))=\varLambda _1\). For the variance of \(\xi _i\) we have

$$\begin{aligned} var(\xi _i)=\int _{-\infty }^{+\infty }\frac{q(s)q(s)^T}{h^2(s)} EI(Y_i\ge s)d\varLambda (s)=\varLambda _2. \end{aligned}$$

By direct calculation we deduce

$$\begin{aligned}&E\left[ \eta _i(\beta _\tau )\xi _i^T|X_i\right] \\= & {} \int _{-\infty }^{+\infty }\frac{X_iq(s)^T}{h(s)}\left\{ dE \left[ a_iI(Y_i\le s, \delta _i=0)|X_i\right] -E\left[ a_iI(Y_i\ge s)|X_i\right] d\varLambda (s)\right\} , \end{aligned}$$

where \(a_i=\frac{\delta _i}{1-G(Y_i)}\left[ \tau -I(T_i<\beta _\tau ^TX_i)\right] \). Noting the facts that \(E\eta _i(\beta _\tau )=0\), \(a_iI(Y_i\le s, \delta _i=0)=0\) and \(E\left[ a_iI(Y_i\ge s)X_i\right] =q(s)\) hold true, we obtain

$$\begin{aligned} cov(\eta _i(\beta _\tau ),\xi _i)= & {} E_{X_i}E\left[ \eta _i(\beta _\tau )\xi _i^T|X_i\right] \\= & {} -\int _{-\infty }^{+\infty }\frac{E\left[ a_iI(Y_i\ge s)X_i\right] q(s)^T}{h(s)}d\varLambda (s)\\= & {} -\int _{-\infty }^{+\infty }\frac{q(s)q(s)^T}{h(s)}d\varLambda (s)\\= & {} -\varLambda _2. \end{aligned}$$

Thus we have \(var\left[ \eta _i(\beta _\tau )+\xi _i\right] =\varLambda _1 -\varLambda _2\). Therefore, by the central limit theorem, Lemma 1 holds true. \(\square \)

Proof of Theorem 1

To prove Theorem 1, we define a new variable \(z=\beta -\beta _\tau \), and then rewrite the objective function of (3) as a function of z and denote it as \(U_n(z, {\hat{G}})\), that is,

$$\begin{aligned} U_n(z,{\hat{G}})=\frac{1}{n}\sum \limits _{i=1}^n \frac{\delta _i}{1-{\hat{G}}(Y_i)}\rho _\tau (Y_i-(z+\beta _\tau )^TX_i). \end{aligned}$$
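As an aside for readers who want to experiment numerically, the weighted objective above is easy to evaluate. The following Python sketch (all function names are ours, not from the paper) estimates \(G\) by the Kaplan–Meier estimator applied to the censoring indicators \(1-\delta _i\) and evaluates the inverse-censoring-probability weighted check loss; ties and left-limit conventions are glossed over.

```python
import numpy as np
from scipy.optimize import minimize

def km_censoring(y, delta):
    """Kaplan-Meier estimate of the censoring distribution, G_hat(Y_i).
    Censoring events are those with delta_i = I(T_i <= C_i) = 0."""
    order = np.argsort(y)
    d_sorted = delta[order]
    n = len(y)
    at_risk = n - np.arange(n)                  # at risk just before Y_(i)
    factors = 1.0 - (1.0 - d_sorted) / at_risk  # KM factors at censoring jumps
    surv = np.cumprod(factors)                  # 1 - G_hat at ordered points
    G_hat = np.empty(n)
    G_hat[order] = 1.0 - surv
    return G_hat

def ipcw_objective(beta, y, X, delta, G_hat, tau):
    """U_n: (1/n) sum_i delta_i/(1 - G_hat(Y_i)) * rho_tau(Y_i - beta'X_i)."""
    r = y - X @ beta
    rho = r * (tau - (r < 0.0))                 # check function rho_tau
    w = delta / np.clip(1.0 - G_hat, 1e-10, None)
    return np.mean(w * rho)

# Example use (hypothetical data y, X, delta):
# G_hat = km_censoring(y, delta)
# beta_hat = minimize(ipcw_objective, np.zeros(X.shape[1]),
#                     args=(y, X, delta, G_hat, tau), method="Nelder-Mead").x
```

Because the loss is convex and continuous in \(\beta \), any standard convex or derivative-free optimizer recovers \({\hat{\beta }}_n\); this is the computational simplicity the abstract refers to.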

Let

$$\begin{aligned} S_n(z)=S_{n1}+S_{n2}, \end{aligned}$$
(8)

where

$$\begin{aligned} S_{n1}= & {} \left( U_n(z, {\hat{G}})-U_n(z, G)\right) -\left( U_n(0, {\hat{G}})-U_n(0, G)\right) ,\\ S_{n2}= & {} U_n(z, G)-U_n(0, G). \end{aligned}$$

It is obvious that both \(S_n(z)\) and \(U_n(z,{\hat{G}})\) are minimized at the same point \({\hat{z}}_n={\hat{\beta }}_n-\beta _\tau \). Note that \(S_n(z)\) is a convex function of z. In the following, we will prove the asymptotic normality of \({\hat{z}}_n\) by analyzing properties of \(S_n(z)\).

Noting that

$$\begin{aligned} \rho _\tau (x-y)-\rho _\tau (x)=y\left[ I(x<0)-\tau \right] +(y-x)\left[ I(x-y<0)-I(x<0)\right] \end{aligned}$$
(9)

holds for any \(x, y\in R\), we have

$$\begin{aligned} S_{n2}= & {} U_n(z, G)-U_n(0, G)\\= & {} \frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i}{1-G(Y_i)}\left[ \rho _\tau (Y_i-z^TX_i-\beta _\tau ^TX_i)-\rho _\tau (Y_i-\beta _\tau ^TX_i)\right] \\= & {} S_{n2}^1+S_{n2}^2, \end{aligned}$$

where

$$\begin{aligned} S_{n2}^1= & {} \frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i}{1-G(Y_i)}z^TX_i \left[ I(\epsilon _i<0)-\tau \right] \\= & {} -\frac{1}{\sqrt{n}}z\cdot \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\eta _i(\beta _\tau ),\\ S_{n2}^2= & {} \frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i}{1-G(Y_i)} (z^TX_i-\epsilon _i)\left[ I(\epsilon _i<z^TX_i)-I(\epsilon _i<0)\right] \\= & {} \frac{1}{2}f(0)z^TE\left( X_iX_i^T\right) z+o_p(||z||^2)+o_p(1). \end{aligned}$$

That is,

$$\begin{aligned} S_{n2}=\frac{1}{2}f(0)z^TE\left( X_iX_i^T\right) z-\frac{1}{\sqrt{n}}z \cdot \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n\eta _i(\beta _\tau )+o_p(||z||^2)+o_p(1). \end{aligned}$$
(10)

Using (9) again we have

$$\begin{aligned} S_{n1}(z)= & {} \left( U_n(z, {\hat{G}})-U_n(z, G)\right) -\left( U_n(0, {\hat{G}})-U_n(0, G)\right) \\= & {} \frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i\left( {\hat{G}}(Y_i) -G(Y_i)\right) }{\left( 1-G(Y_i)\right) \left( 1-{\hat{G}}(Y_i)\right) } \left[ \rho _\tau (Y_i-z^TX_i-\beta _\tau ^TX_i)-\rho _\tau (Y_i-\beta _\tau ^TX_i)\right] \\= & {} S_{n1}^1+S_{n1}^2, \end{aligned}$$

where

$$\begin{aligned} S_{n1}^1= & {} \frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i\left( {\hat{G}}(Y_i) -G(Y_i)\right) }{\left( 1-G(Y_i)\right) \left( 1-{\hat{G}}(Y_i)\right) }z^TX_i \left[ I(\epsilon _i<0)-\tau \right] \nonumber \\= & {} -\frac{1}{\sqrt{n}}z\cdot \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n \left[ {\hat{\eta }}_i(\beta _\tau )-\eta _i(\beta _\tau )\right] , \end{aligned}$$
(11)

and

$$\begin{aligned} S_{n1}^2=\frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i\left( {\hat{G}}(Y_i) -G(Y_i)\right) }{\left( 1-G(Y_i)\right) \left( 1-{\hat{G}}(Y_i)\right) } \left( z^TX_i-\epsilon _i\right) \left[ I(\epsilon _i<z^TX_i)-I(\epsilon _i<0)\right] . \end{aligned}$$

Next, we prove that \(S_{n1}^2=o_p(||z||^2)+o_p(1)\) holds true. To do this, we first define \({\widetilde{Q}}(t)=\frac{1}{n}\sum \nolimits _{i=1}^n\left( z^TX_i-\epsilon _i\right) \left[ I(\epsilon _i<z^TX_i)-I(\epsilon _i<0)\right] I(T_i\le \min (C_i,t))\) and then apply the martingale integral representation for \(\frac{{\hat{G}}(t)-G(t)}{1-G(t)}\) again. As a result we have

$$\begin{aligned} S_{n1}^2= & {} \int _{-\infty }^{+\infty }\frac{\left( {\hat{G}}(t)-G(t)\right) }{\left( 1-G(t)\right) \left( 1-{\hat{G}}(t)\right) }d{\widetilde{Q}}(t)\\= & {} \frac{1}{n}\int _{-\infty }^{+\infty }\frac{1}{1-G(t)}\int _{-\infty }^t\frac{\sum \limits _{j=1}^ndI(Y_j\le s,\delta _j=0)-\sum \limits _{j=1}^nI(Y_j\ge s)d\varLambda (s)}{h(s)}d{\widetilde{Q}}(t)+o_p(1)\\= & {} \frac{1}{n}\sum \limits _{j=1}^n\int _{-\infty }^{+\infty }\frac{dI(Y_j\le s,\delta _j=0)-I(Y_j\ge s)d\varLambda (s)}{h(s)}\int _s^{+\infty }\frac{1}{1-G(t)}d{\widetilde{Q}}(t)+o_p(1). \end{aligned}$$

To calculate \(S_{n1}^2\), on the one hand, since

$$\begin{aligned}&\int _s^{+\infty }\frac{1}{1-G(t)}d{\widetilde{Q}}(t)\\= & {} \frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _i}{\left( 1-G(Y_i)\right) } \left( z^TX_i-\epsilon _i\right) \left[ I(\epsilon _i<z^TX_i)-I(\epsilon _i<0)\right] I(T_i\ge s) \end{aligned}$$

is the mean of independent and identically distributed random variables, by the law of large numbers we have

$$\begin{aligned}&\left| \int _s^{+\infty }\frac{1}{1-G(t)}d{\widetilde{Q}}(t)\right| \\&\quad \le E\left\{ \frac{\delta _i}{\left( 1-G(Y_i)\right) }\left| \left( z^TX_i-\epsilon _i\right) \left[ I(\epsilon _i<z^TX_i) -I(\epsilon _i<0)\right] I(T_i\ge s)\right| \right\} +o_p(1)\\&\quad =E\left| \left( z^TX_i-\epsilon _i\right) \left[ I(\epsilon _i<z^TX_i)-I(\epsilon _i<0)\right] I(T_i\ge s)\right| +o_p(1)\\&\quad \le E\left[ \left| z^TX_i-\epsilon _i\right| I(\epsilon _i\in A_i)\right] +o_p(1) \end{aligned}$$

holds uniformly in s, where \(A_i\) denotes the interval \((0, z^TX_i]\) or \((z^TX_i, 0]\), depending on whether \(z^TX_i>0\) or not. Therefore we have

$$\begin{aligned} \int _s^{+\infty }\frac{1}{1-G(t)}d{\widetilde{Q}}(t)=O_p(||z||^2)+o_p(1). \end{aligned}$$

On the other hand, noting that \(\frac{1}{n}\sum \limits _{j=1}^n \int _{-\infty }^{+\infty }\frac{dI(Y_j\le s,\delta _j=0)-I(Y_j\ge s)d\varLambda (s)}{h(s)}\) is also the mean of independent and identically distributed random variables and

$$\begin{aligned} E\left[ \int _{-\infty }^{+\infty }\frac{dI(Y_j\le s,\delta _j=0) -I(Y_j\ge s)d\varLambda (s)}{h(s)}\right] =0 \end{aligned}$$

holds true, we have

$$\begin{aligned} \frac{1}{n}\sum \limits _{j=1}^n\int _{-\infty }^{+\infty } \frac{dI(Y_j\le s,\delta _j=0)-I(Y_j\ge s)d\varLambda (s)}{h(s)}=o_p(1). \end{aligned}$$

Therefore \(S_{n1}^2=o_p(||z||^2)+o_p(1)\) holds true. Combining this with Eq. (11) we know

$$\begin{aligned} S_{n1}=-\frac{1}{\sqrt{n}}z\cdot \frac{1}{\sqrt{n}} \sum \limits _{i=1}^n\left[ {\hat{\eta }}_i(\beta _\tau )-\eta _i(\beta _\tau )\right] +o_p(||z||^2)+o_p(1). \end{aligned}$$
(12)

Combining Eqs. (8), (10) and (12), we obtain

$$\begin{aligned} S_n(z)=z^T\left[ \frac{1}{2}f(0)E\left( X_iX_i^T\right) +o_p(1)\right] z-\frac{1}{\sqrt{n}}z\cdot \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )+o_p(1). \end{aligned}$$

This means that the unique minimizer of \(S_n(z)\) is

$$\begin{aligned} {\hat{z}}_n=\frac{1}{\sqrt{n}}\left[ f(0)E(X_iX_i^T)+o_p(1)\right] ^{-1}\cdot \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau ). \end{aligned}$$
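Writing \(\varSigma =f(0)E(X_iX_i^T)\), Lemma 1 and Slutsky's theorem then give

$$\begin{aligned} \sqrt{n}\left( {\hat{\beta }}_n-\beta _\tau \right) =\sqrt{n}\,{\hat{z}}_n {\mathop {\longrightarrow }\limits ^{d}} N\left( \mathbf{0}, \varSigma ^{-1}\left( \varLambda _1-\varLambda _2\right) \varSigma ^{-1}\right) , \end{aligned}$$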

which is the asymptotic normality asserted in Theorem 1. This completes the proof. \(\square \)

To prove Theorem 2, we need two more lemmas, as stated in the following.

Lemma 2

Under the conditions C1-C3, we have

$$\begin{aligned} \frac{1}{n}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau ){\hat{\eta }}_i(\beta _\tau )^T =\frac{1}{n}\sum \limits _{i=1}^n\eta _i(\beta _\tau )\eta _i(\beta _\tau )^T+o_p(1). \end{aligned}$$

Proof

By the facts (Zhou 1991)

$$\begin{aligned}&\sup \limits _{-\infty<t\le Y_{(n)}}\left| {\hat{G}}(t)-G(t)\right| =o_p(1),\\&\sup \limits _{-\infty <t\le Y_{(n)}}\left| \frac{1-G(t)}{1-{\hat{G}}(t)}\right| =O_p(1), \end{aligned}$$

for any \(\alpha \in R^k\) satisfying \(||\alpha ||=1\), we have that

$$\begin{aligned}&\frac{1}{n}\sum \limits _{i=1}^n\alpha ^T\left[ {\hat{\eta }}_i(\beta _\tau ) {\hat{\eta }}_i(\beta _\tau )^T-\eta _i(\beta _\tau )\eta _i(\beta _\tau )^T\right] \alpha \\&=\frac{1}{n}\sum \limits _{i=1}^n\frac{\left[ 1-G(T_i)\right] ^2 -\left[ 1-{\hat{G}}(T_i)\right] ^2}{\left[ 1-{\hat{G}}(T_i)\right] ^2} \left[ \frac{\delta _i\left( \tau -I(\epsilon _i<0)\right) ^2}{\left[ 1-G(T_i) \right] ^2}\alpha ^TX_iX_i^T\alpha \right] \\&\quad \le o_p(1)\frac{1}{n}\sum \limits _{i=1}^n\left[ \frac{\delta _i \left( \tau -I(\epsilon _i<0)\right) ^2}{\left[ 1-G(T_i)\right] ^3}\alpha ^TX_iX_i^T\alpha \right] . \end{aligned}$$

Since

$$\begin{aligned} \frac{1}{n}\sum \limits _{i=1}^n\left[ \frac{\delta _i\left( \tau -I(\epsilon _i<0) \right) ^2}{\left[ 1-G(T_i)\right] ^3}\alpha ^TX_iX_i^T\alpha \right] \le C_0\frac{1}{n} \sum \limits _{i=1}^n\frac{\delta _i}{\left[ 1-G(T_i)\right] ^3} \end{aligned}$$

holds true for some sufficiently large positive \(C_0\), by condition C3 we have \(\frac{1}{n}\sum \limits _{i=1}^n\alpha ^T\left[ {\hat{\eta }}_i(\beta _\tau ) {\hat{\eta }}_i(\beta _\tau )^T-\eta _i(\beta _\tau )\eta _i(\beta _\tau )^T\right] \alpha =o_p(1)\). This completes the proof of Lemma 2. \(\square \)

Lemma 3

Under conditions C1-C3 we have

$$\begin{aligned}&(a)&\ \max \limits _{1\le i\le n}||{\hat{\eta }}_i(\beta _\tau )||=o_p(n^{1/2}); \qquad \quad (b) \ ||\lambda ||=O_p(n^{-\frac{1}{2}});\\&(c)&\ \lambda =\left[ \frac{1}{n}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau ) {\hat{\eta }}_i(\beta _\tau )^T\right] ^{-1}\left[ \frac{1}{n}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] +o_p(n^{-\frac{1}{2}}). \end{aligned}$$

Proof

(a) holds true by Lemma 3 of Wu (1981). (b) and (c) follow directly from Lemma 1, Lemma 2 and the same arguments as in Owen (1991); we omit the details. \(\square \)

Proof of Theorem 2

By Lemma 3 we know \(\max \limits _{1\le i\le n}|\lambda ^T{\hat{\eta }}_i(\beta _\tau )|=o_p(1)\) holds true, thus

$$\begin{aligned} -2\log R_n(\beta _\tau )= & {} 2\sum \limits _{i=1}^n\log \left[ 1+\lambda ^T{\hat{\eta }}_i(\beta _\tau )\right] \\= & {} 2\sum \limits _{i=1}^n\lambda ^T{\hat{\eta }}_i(\beta _\tau ) -\sum \limits _{i=1}^n\left[ \lambda ^T{\hat{\eta }}_i(\beta _\tau )\right] ^2+R_n, \end{aligned}$$

where

$$\begin{aligned} |R_n|\le C\left| \sum \limits _{i=1}^n\left[ \lambda ^T{\hat{\eta }}_i(\beta _\tau )\right] ^3\right| \end{aligned}$$

for some positive C. By Lemma 2 and Lemma 3 we obtain

$$\begin{aligned} |R_n|\le o_p(1)\left| \sum \limits _{i=1}^n\left[ \lambda ^T{\hat{\eta }}_i(\beta _\tau )\right] ^2\right| = o_p(n)\lambda ^T\left[ \frac{1}{n}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau ){\hat{\eta }}_i(\beta _\tau )^T\right] \lambda =o_p(1). \end{aligned}$$

Therefore,

$$\begin{aligned} -2\log R_n(\beta _\tau )= & {} 2\sum \limits _{i=1}^n\lambda ^T{\hat{\eta }}_i(\beta _\tau )-\sum \limits _{i=1}^n\left[ \lambda ^T{\hat{\eta }}_i(\beta _\tau )\right] ^2+o_p(1)\\= & {} \left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] ^T\left[ \frac{1}{n}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau ){\hat{\eta }}_i(\beta _\tau )^T\right] ^{-1}\left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] +o_p(1) \end{aligned}$$

holds true. Combining the above equation with Lemma 1 and Lemma 2, we have Theorem 2 proved. \(\square \)
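For readers implementing this, the profiling step is the standard empirical likelihood computation in the spirit of Owen (1991). The following Python sketch (names ours) solves the estimating equation for \(\lambda \) from Lemma 3(c) by damped Newton iterations and returns \(-2\log R_n(\beta _\tau )\) from the \(n\times k\) matrix of \({\hat{\eta }}_i(\beta _\tau )\).

```python
import numpy as np

def neg2_log_el_ratio(eta, max_iter=50, tol=1e-10):
    """-2 log R_n from the n x k matrix eta of hat-eta_i(beta).
    Solves (1/n) sum_i eta_i / (1 + lam'eta_i) = 0 for the Lagrange
    multiplier lam by Newton's method with step-halving."""
    n, k = eta.shape
    lam = np.zeros(k)
    for _ in range(max_iter):
        denom = 1.0 + eta @ lam                        # must stay positive
        grad = (eta / denom[:, None]).mean(axis=0)     # estimating equation
        hess = -(eta.T * denom**-2) @ eta / n          # its Jacobian in lam
        step = np.linalg.solve(hess, -grad)            # Newton direction
        t = 1.0
        while np.any(1.0 + eta @ (lam + t * step) <= 1e-8):
            t /= 2.0                                   # keep weights positive
        lam += t * step
        if np.linalg.norm(grad) < tol:
            break
    return 2.0 * np.sum(np.log1p(eta @ lam))
```

By Theorem 2 the returned statistic converges to a mixture of chi-squared distributions rather than \(\chi ^2_k\); the adjustment factor \(\gamma _n\) of Theorem 4 restores the standard \(\chi ^2_k\) calibration.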

Proof of Theorem 3

Let \(\varLambda _1(\beta )\) and \(\varLambda _2(\beta )\) denote, respectively, the right-hand sides of (5) and (6) after replacing \({\hat{G}}(\cdot )\) by \(G(\cdot )\). By properties of \({\hat{G}}\) and a proof similar to that of Lemma 2 we can show

$$\begin{aligned}&\left( {\hat{\varLambda }}_1({\hat{\beta }}_n)-{\hat{\varLambda }}_1(\beta _\tau )\right) -\left( \varLambda _1({\hat{\beta }}_n)-\varLambda _1(\beta _\tau )\right) =o_p(1), \end{aligned}$$
(13)
$$\begin{aligned}&{\hat{\varLambda }}_1(\beta _\tau )-\varLambda _1(\beta _\tau )=o_p(1). \end{aligned}$$
(14)

Since \({\hat{\beta }}_n=\beta _\tau +O_p(n^{-1/2})\) holds by Theorem 1, we have

$$\begin{aligned}&\varLambda _1({\hat{\beta }}_n)-\varLambda _1(\beta _\tau )\nonumber \\&\quad =\frac{1}{n}\sum \limits _{i=1}^n\frac{\delta _iX_iX_i^T}{\left[ 1-G(Y_i)\right] ^2} \left\{ \left[ \tau -I\left( \epsilon _i<O_p\left( n^{-1/2}\right) \right) \right] ^2 -\left[ \tau -I(\epsilon _i<0)\right] ^2\right\} \nonumber \\&\quad =o_p(1). \end{aligned}$$
(15)

Noting that \(\varLambda _1(\beta _\tau )=\varLambda _1+o_p(1)\) and combining this fact with Eqs. (13), (14) and (15), we have that \({\hat{\varLambda }}_1({\hat{\beta }}_n)=\varLambda _1+o_p(1)\) holds true.

To prove the consistency of \({\hat{\varLambda }}_2({\hat{\beta }}_n)\), we first define

$$\begin{aligned} U({\hat{G}},\beta ,s)=\frac{1}{n}\sum \limits _{j=1}^n\frac{\delta _j}{1-{\hat{G}}(Y_j)} \left[ \tau -I(Y_j<\beta ^T X_j)\right] I(Y_j\ge s)X_j. \end{aligned}$$

Then we have

$$\begin{aligned} \left| U({\hat{G}},{\hat{\beta }}_n,s)-U(G,{\hat{\beta }}_n,s)\right| \le O_p(1)\cdot \frac{1}{n}\sum \limits _{j=1}^n \frac{\delta _j\left| G(Y_j)-{\hat{G}}(Y_j)\right| }{\left[ 1-{\hat{G}}(Y_j)\right] \left[ 1-G(Y_j)\right] } \end{aligned}$$

holds for all s. Therefore by properties of \({\hat{G}}\) and condition C3 we have

$$\begin{aligned} U({\hat{G}},{\hat{\beta }}_n,s)-U(G,{\hat{\beta }}_n,s)=o_p(1) \end{aligned}$$
(16)

holds uniformly in s. On the other hand

$$\begin{aligned}&\left| U(G,{\hat{\beta }}_n,s)-U(G,\beta _\tau ,s)\right| \\&\quad \le O_p(1)\cdot \frac{1}{n}\sum \limits _{j=1}^n \frac{\delta _j}{1-G(Y_j)}\left| I(\epsilon _j<0)-I(\epsilon _j<O_p(n^{-1/2}))\right| \end{aligned}$$

holds for all s. Then by conditions C1 and C3 we have

$$\begin{aligned} U(G,{\hat{\beta }}_n,s)-U(G,\beta _\tau ,s)=o_p(1) \end{aligned}$$
(17)

holds uniformly in s. By Eqs. (16), (17) and the fact that \(U(G,\beta _\tau ,s)=q(s)+o_p(1)\) we have

$$\begin{aligned} U({\hat{G}},{\hat{\beta }}_n,s)=q(s)+o_p(1) \end{aligned}$$
(18)

holds uniformly in s. Noting that \(\frac{1}{n}\sum \nolimits _{j=1}^nI(Y_j\ge s)=h(s)+o_p(1)\) holds uniformly, by Eq. (18) and the law of large numbers we have

$$\begin{aligned} {\hat{\varLambda }}_2({\hat{\beta }}_n)= & {} \frac{1}{n}\sum \limits _{i=1}^n(1-\delta _i)\frac{q(Y_i)q(Y_i)^T}{h^2(Y_i)}+o_p(1)\\= & {} \varLambda _2+o_p(1) \end{aligned}$$

holds true. This completes the proof of Theorem 3. \(\square \)
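The proof above also indicates how the plug-in estimators can be computed in practice. The following Python sketch (names ours; the authoritative definitions are Eqs. (5) and (6) in the main text, which are not reproduced here) assembles \({\hat{\varLambda }}_1({\hat{\beta }}_n)\) and \({\hat{\varLambda }}_2({\hat{\beta }}_n)\) from the expressions visible in Eqs. (15) and (18).

```python
import numpy as np

def lambda_hat_1(beta, y, X, delta, G_hat, tau):
    """(1/n) sum_i delta_i [tau - I(Y_i < beta'X_i)]^2 X_i X_i' / (1-G_hat_i)^2,
    the form appearing in Eq. (15) with G replaced by its estimate."""
    s = np.clip(1.0 - G_hat, 1e-10, None)
    a = delta * (tau - (y < X @ beta)) ** 2 / s**2
    return (X.T * a) @ X / len(y)

def lambda_hat_2(beta, y, X, delta, G_hat, tau):
    """(1/n) sum over censored i of q_hat(Y_i) q_hat(Y_i)' / h_hat(Y_i)^2,
    with q_hat(s) = U(G_hat, beta, s) and h_hat(s) = mean of I(Y_j >= s)."""
    n, k = X.shape
    s = np.clip(1.0 - G_hat, 1e-10, None)
    w = delta * (tau - (y < X @ beta)) / s          # summands of U before I(Y>=s)
    L2 = np.zeros((k, k))
    for i in np.flatnonzero(delta == 0):            # censored observations
        risk = (y >= y[i]).astype(float)
        q = (X.T * (w * risk)).sum(axis=1) / n      # q_hat(Y_i)
        h = risk.mean()                             # h_hat(Y_i)
        L2 += np.outer(q, q) / h**2
    return L2 / n
```

Theorem 3 then guarantees that \({\hat{\varLambda }}_1({\hat{\beta }}_n)-{\hat{\varLambda }}_2({\hat{\beta }}_n)\) consistently estimates the limiting covariance \(\varLambda _1-\varLambda _2\) of Lemma 1.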

Proof of Theorem 4

By the proof of Theorem 3, both \({\hat{\varLambda }}_1(\beta _\tau )=\varLambda _1+o_p(1)\) and \({\hat{\varLambda }}_2(\beta _\tau )=\varLambda _2+o_p(1)\) obviously hold true. Therefore, by Lemma 1 and the proof of Theorem 2, we obtain

$$\begin{aligned}&-2\gamma _n(\beta _\tau )\cdot \log R_n(\beta _\tau )\\&\quad =\left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] ^T\left[ {\hat{\varLambda }}_1(\beta _\tau )-{\hat{\varLambda }}_2(\beta _\tau )\right] ^{-1}\left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] +o_p(1)\\&\quad =\left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] ^T\left( \varLambda _1-\varLambda _2\right) ^{-1}\left[ \frac{1}{\sqrt{n}}\sum \limits _{i=1}^n{\hat{\eta }}_i(\beta _\tau )\right] +o_p(1)\\&\quad {\mathop {\longrightarrow }\limits ^{d}}\chi ^2_k. \end{aligned}$$

This completes the proof of Theorem 4. \(\square \)

Theorem 5 follows directly from the proof of Theorem 4.


Cite this article

Gao, Q., Zhou, X., Feng, Y. et al. An empirical likelihood method for quantile regression models with censored data. Metrika 84, 75–96 (2021). https://doi.org/10.1007/s00184-020-00775-1
