Jackknife empirical likelihood inference for the accelerated failure time model

  • Original Paper

Abstract

The accelerated failure time (AFT) model is a useful semi-parametric model for right-censored data and an alternative to the commonly used proportional hazards model. Statistical inference for the AFT model has attracted considerable attention; however, the regression parameter estimators are difficult to compute because the rank-based estimating equations are not smooth. Brown and Wang (Stat Med 26(4):828–836, 2007) proposed an induced smoothing approach, which smooths the estimating functions to obtain point and variance estimators. In this paper, a more computationally efficient method, jackknife empirical likelihood (JEL), is proposed for inference in the AFT model without computing the limiting variance. Results from extensive simulations suggest that the JEL method outperforms the traditional normal approximation method in most cases. Two real data sets are analyzed to illustrate the proposed method.

References

  • Brown B, Wang Y-G (2005) Standard errors and covariance matrices for smoothed rank estimators. Biometrika 92(1):149–158

  • Brown B, Wang Y-G (2007) Induced smoothing for rank regression with censored survival times. Stat Med 26(4):828–836

  • Buckley J, James I (1979) Linear regression with censored data. Biometrika 66(3):429–436

  • Chen B, Pan G, Yang Q, Zhou W (2015) Large dimensional empirical likelihood. Stat Sin 25:1659–1677

  • Chiou S, Kang S, Yan J (2015) Rank-based estimating equations with general weight for accelerated failure time models: an induced smoothing approach. Stat Med 34(9):1495–1510

  • Chiou SH, Kang S, Yan J (2014a) Fast accelerated failure time modeling for case-cohort data. Stat Comput 24(4):559–568

  • Chiou SH, Kang S, Yan J et al (2014b) Fitting accelerated failure time models in routine survival analysis with R package aftgee. J Stat Softw 61(11):1–23

  • Cox DR (1972) Regression models and life-tables. J Roy Stat Soc Ser B (Methodol) 34(2):187–220

  • D’Angio GJ, Breslow N, Beckwith JB, Evans A, Baum E, deLorimier A, Fernbach D, Hrabovsky E, Jones B, Kelalis P et al (1989) Treatment of Wilms’ tumor: results of the Third National Wilms’ Tumor Study. Cancer 64(2):349–360

  • Fygenson M, Ritov Y (1994) Monotone estimating equations for censored data. Ann Stat 22(2):732–746

  • Heller G (2007) Smoothed rank regression with censored data. J Am Stat Assoc 102(478):552–559

  • Jin Z, Lin D, Wei L, Ying Z (2003) Rank-based inference for the accelerated failure time model. Biometrika 90(2):341–353

  • Jing B-Y, Yuan J, Zhou W (2009) Jackknife empirical likelihood. J Am Stat Assoc 104(487):1224–1232

  • Johnson LM, Strawderman RL (2009) Induced smoothing for the semiparametric accelerated failure time model: asymptotics and extensions to clustered data. Biometrika 96(3):577–590

  • Kalbfleisch JD, Prentice RL (1980) The statistical analysis of failure time data. Wiley, New York

  • Li Z, Xu J, Zhou W (2016) On nonsmooth estimating functions via jackknife empirical likelihood. Scand J Stat 43(1):49–69

  • Lu W, Liang Y (2006) Empirical likelihood inference for linear transformation models. J Multivar Anal 97(7):1586–1599

  • McGilchrist C, Aisbett C (1991) Regression with frailty in survival analysis. Biometrics 47(2):461–466

  • Owen AB (1988) Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75(2):237–249

  • Owen AB (1990) Empirical likelihood ratio confidence regions. Ann Stat 18(1):90–120

  • Owen AB (2001) Empirical likelihood. CRC Press, London

  • Prentice RL (1978) Linear rank tests with right censored data. Biometrika 65(1):167–179

  • Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22:300–325

  • Thomas DR, Grunkemeier GL (1975) Confidence interval estimation of survival probabilities for censored data. J Am Stat Assoc 70(352):865–871

  • Tsiatis AA (1990) Estimating regression parameters using linear rank tests for censored data. Ann Stat 18(1):354–372

  • Wang Y-G, Fu L (2011) Rank regression for accelerated failure time model with clustered and censored data. Comput Stat Data Anal 55(7):2334–2343

  • Wei L-J, Ying Z, Lin D (1990) Linear regression analysis of censored survival data based on rank tests. Biometrika 77(4):845–851

  • Wilks SS (1938) The large-sample distribution of the likelihood ratio for testing composite hypotheses. Ann Math Stat 9(1):60–62

  • Yang H, Liu S, Zhao Y (2016) Jackknife empirical likelihood for linear transformation models with right censoring. Ann Inst Stat Math 68(5):1095–1109

  • Yang H, Zhao Y (2012) New empirical likelihood inference for linear transformation models. J Stat Plan Inference 142:1659–1668

  • Ying Z (1993) A large sample study of rank estimation for censored regression data. Ann Stat 21(1):76–99

  • Yu W, Sun Y, Zheng M (2011) Empirical likelihood method for linear transformation models. Ann Inst Stat Math 63(2):331–346

  • Zeng D, Lin D (2008) Efficient resampling methods for nonsmooth estimating functions. Biostatistics 9(2):355–363

  • Zhao Y, Meng X, Yang H (2015) Jackknife empirical likelihood inference for the mean absolute deviation. Comput Stat Data Anal 91:92–101

  • Zhao Y, Yang S (2012) Empirical likelihood confidence intervals for regression parameters of the survival rate. J Nonparametr Stat 24(1):59–70

Acknowledgements

We would like to thank the Editor-in-Chief and two reviewers for their excellent comments, which have helped to improve the manuscript significantly. Yichuan Zhao acknowledges the support from both NSF and NSA Grants.

Author information

Corresponding author

Correspondence to Yichuan Zhao.

Appendix A: Proofs of Theorems

To derive the asymptotic properties of \(l(\beta _0)\) and \(l^*(\beta _{10})\), we assume that the following regularity conditions hold.

  1. (C.1)

    X is bounded, that is, \(P(\left\| X \right\| \le M) = 1\) for some \(0<M<\infty \).

  2. (C.2)

    The conditional distribution \({F_{{e_1}(\beta )\left| {{X_1}} \right. }}(t)\) of \({e_1}(\beta ) = {Y_1} - {\beta ^T}{X_1}\) given \(X_1\) is twice continuously differentiable in t for all X.

  3. (C.3)

    For any X, the conditional density function \({{F'}_{{e_1}(\beta )\left| {{X_1}} \right. }}(t) = {f_{{e_1}(\beta )\left| {{X_1}} \right. }}(t) > 0\) for t in a neighborhood of 0.

First, we re-express the smoothed rank estimating function \({{ S}_n}(\beta )\) in (2.2) as a U-statistic with a symmetric kernel function:

$$\begin{aligned} {{S}_n}(\beta )&= \sum \limits _{i = 1}^n {\sum \limits _{j = 1}^n {{\Delta _i}({X_i} - {X_j})\Phi \left[ {\frac{{{e_j}(\beta ) - {e_i}(\beta )}}{{{r_{ij}}}}} \right] } } \\&= \sum \limits _{1 \le i< j \le n} {{\Delta _i}({X_i} - {X_j})\Phi \left[ {\frac{{{e_j}(\beta ) - {e_i}(\beta )}}{{{r_{ij}}}}} \right] }\\&\quad +\, \sum \limits _{1 \le j< i \le n} {{\Delta _i}({X_i} - {X_j})\Phi \left[ {\frac{{{e_j}(\beta ) - {e_i}(\beta )}}{{{r_{ij}}}}} \right] } \\&= \sum \limits _{1 \le i< j \le n} {{\Delta _i}({X_i} - {X_j})\Phi \left[ {\frac{{{e_j}(\beta ) - {e_i}(\beta )}}{{{r_{ij}}}}} \right] }\\&\quad +\, \sum \limits _{1 \le i < j \le n} {{\Delta _j}({X_j} - {X_i})\Phi \left[ {\frac{{{e_i}(\beta ) - {e_j}(\beta )}}{{{r_{ji}}}}} \right] }, \end{aligned}$$

Since \(r_{ij}^2 = {({X_i} - {X_j})^T}({X_i} - {X_j})/n\), we have \({r_{ij}} = {r_{ji}}\), so that

$$\begin{aligned} {{ S}_n}(\beta )&= \sum \limits _{1 \le i< j \le n} {({X_i} - {X_j})\left\{ {{\Delta _i}\Phi \left[ {\frac{{{e_j}(\beta ) - {e_i}(\beta )}}{{{r_{ij}}}}} \right] - {\Delta _j}\Phi \left[ {\frac{{{e_i}(\beta ) - {e_j}(\beta )}}{{{r_{ij}}}}} \right] } \right\} } \\&= \left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) \left[ {{{\left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) }^{ - 1}}\sum \limits _{1 \le i < j \le n} {K({Z_i},{Z_j};\beta )} } \right] \\&\equiv \left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) S_n^*(\beta ), \end{aligned}$$

where \(S_n^*(\beta )\) is a U-statistic of degree 2

$$\begin{aligned} S_n^*(\beta ) = {\left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) ^{ - 1}}\sum \limits _{1 \le i < j \le n} {K({Z_i},{Z_j};\beta )} \equiv {U_n}(\beta ), \end{aligned}$$

with the kernel function

$$\begin{aligned} K({Z_i},{Z_j};\beta ) = ({X_i} - {X_j})\left\{ {{\Delta _i}\Phi \left[ {\frac{{{e_j}(\beta ) - {e_i}(\beta )}}{{{r_{ij}}}}} \right] - {\Delta _j}\Phi \left[ {\frac{{{e_i}(\beta ) - {e_j}(\beta )}}{{{r_{ij}}}}} \right] } \right\} . \end{aligned}$$
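As a numerical illustration of the kernel representation above (not part of the original derivation), the following sketch evaluates the smoothed score \(S_n(\beta)\) by the raw double sum and returns the corresponding U-statistic \(U_n(\beta) = S_n(\beta)/\binom{n}{2}\); the data layout and function names are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF Phi via the error function (avoids a SciPy dependency).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def smoothed_score(beta, X, Y, delta):
    """Smoothed Gehan-type score S_n(beta) via the raw double sum, and the
    degree-2 U-statistic U_n(beta) = S_n(beta) / C(n, 2)."""
    n, p = X.shape
    e = Y - X @ beta                       # residuals e_i(beta)
    S = np.zeros(p)
    for i in range(n):
        for j in range(n):
            if i == j:                     # the i = j term vanishes anyway
                continue
            d = X[i] - X[j]
            r = sqrt(d @ d / n)            # r_ij = r_ji by construction
            S += delta[i] * d * norm_cdf((e[j] - e[i]) / r)
    return S, S / (n * (n - 1) / 2.0)
```

Summing the symmetric kernel \(K(Z_i,Z_j;\beta)\) over the pairs \(i<j\) reproduces the same vector, which is exactly the identity displayed above.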

Similarly, we can express \({{\tilde{S}}_n}(\beta )\) in (2.1) as a U-statistic with a symmetric kernel function, that is,

$$\begin{aligned} {{\tilde{S}}_n}(\beta )&= \sum \limits _{i = 1}^n {\sum \limits _{j = 1}^n {{\Delta _i}({X_i} - {X_j})I[{e_j}(\beta ) \ge {e_i}(\beta )]} } \\&= \sum \limits _{1 \le i< j \le n} {{\Delta _i}({X_i} - {X_j})I[{e_j}(\beta ) \ge {e_i}(\beta )]} + \sum \limits _{1 \le j< i \le n} {{\Delta _i}({X_i} - {X_j})I[{e_j}(\beta ) \ge {e_i}(\beta )]} \\&= \sum \limits _{1 \le i< j \le n} {{\Delta _i}({X_i} - {X_j})I[{e_j}(\beta ) \ge {e_i}(\beta )]} + \sum \limits _{1 \le i< j \le n} {{\Delta _j}({X_j} - {X_i})I[{e_i}(\beta ) \ge {e_j}(\beta )]} \\&= \sum \limits _{1 \le i< j \le n} {({X_i} - {X_j})\left\{ {{\Delta _i}I[{e_j}(\beta ) \ge {e_i}(\beta )] - {\Delta _j}I[{e_i}(\beta ) \ge {e_j}(\beta )]} \right\} } \\&= \left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) \left[ {{{\left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) }^{ - 1}}\sum \limits _{1 \le i < j \le n} {H({Z_i},{Z_j};\beta )} } \right] \\&\equiv \left( {\begin{array}{*{20}{c}} n\\ 2 \end{array}} \right) {{W}_n}(\beta ), \end{aligned}$$

with the kernel function

$$\begin{aligned} H({Z_i},{Z_j};\beta ) = ({X_i} - {X_j})\left\{ {{\Delta _i}I[{e_j}(\beta ) \ge {e_i}(\beta )] - {\Delta _j}I[{e_i}(\beta ) \ge {e_j}(\beta )]} \right\} . \end{aligned}$$

Fygenson and Ritov (1994) showed that \({W_n}(\beta _0 )\) is asymptotically normal with expectation zero. Furthermore, by (A.7) in the Appendix of Johnson and Strawderman (2009), \({{ U}_n}(\beta _0 )\) is asymptotically equivalent to \({W_n}(\beta _0 )\), in the sense that \(\sqrt{n} \left\| {{{ U}_n}(\beta _0 ) - {W_n}(\beta _0 )} \right\| \xrightarrow {p} 0\); that is,

$$\begin{aligned} {U_n}(\beta _0 ) = {W_n}(\beta _0 ) + {o_p}({n^{ - 1/2}}). \end{aligned}$$
(A.1)

Then, \(E{U_n}(\beta _0 ) = E{W_n}(\beta _0 ) + E\left[ {{o_p}({n^{ - 1/2}})} \right] \), and hence \(E{U_n}({\beta _0}) \rightarrow 0\) as \(n \rightarrow \infty \).
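The non-smooth score admits the same treatment. As an illustrative sketch (not the authors' code), the following hypothetical helper evaluates \(W_n(\beta)\) directly from the symmetric kernel \(H(Z_i, Z_j;\beta)\) over the pairs \(i<j\).

```python
import numpy as np

def gehan_score(beta, X, Y, delta):
    """Non-smooth Gehan score W_n(beta) = tilde S_n(beta) / C(n, 2),
    evaluated through the symmetric kernel H(Z_i, Z_j; beta)."""
    n, p = X.shape
    e = Y - X @ beta
    W = np.zeros(p)
    for i in range(n):
        for j in range(i + 1, n):
            d = X[i] - X[j]
            # H = (X_i - X_j){Delta_i I[e_j >= e_i] - Delta_j I[e_i >= e_j]}
            W += d * (delta[i] * (e[j] >= e[i]) - delta[j] * (e[i] >= e[j]))
    return W / (n * (n - 1) / 2.0)
```

By the splitting of the double sum shown above, this kernel form agrees exactly with the raw double-sum definition of \({\tilde{S}}_n(\beta)\) divided by \(\binom{n}{2}\).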

Before proving Theorem 2.1, we introduce notation similar to that of Li et al. (2016). Define

$$\begin{aligned} \left\{ \begin{array}{l} {{{\hat{V}}}_i}(\beta ) = n{W_n}(\beta ) - (n - 1)W_{n - 1}^{( - i)}(\beta ),\,i = 1,\ldots ,n,\\ {{{\hat{Q}}}_i}(\beta ) = n{U_n}(\beta ) - (n - 1)U_{n - 1}^{( - i)}(\beta ),\,i = 1,\ldots ,n,\\ {W_n}(\beta ) = \frac{1}{n}\sum \limits _{i = 1}^n {{{{\hat{V}}}_i}(\beta )},\\ G(\beta ) = \frac{1}{n}\sum \limits _{i = 1}^n {{{{\hat{V}}}_i}(\beta ){\hat{V}}_i^T(\beta )}, \\ G^*(\beta ) = \frac{1}{n}\sum \limits _{i = 1}^n {{{{\hat{Q}}}_i}(\beta ){\hat{Q}}_i^T(\beta )}, \\ \phi (z,\beta ) = {({\phi _1}(z,\beta ),\ldots ,{\phi _p}(z,\beta ))^T} = EH(z,{Z_1};\beta ),\\ \psi (x,y,\beta ) = H(x,y;\beta ) - \phi (x,\beta ) - \phi (y,\beta ), \\ g(z,\beta ) = {({g_1}(z,\beta ),\ldots ,{g_p}(z,\beta ))^T} = 2\phi (z,\beta ),\\ \sigma _l^2(\beta ) = Var({\phi _l}({Z_1},\beta )),\,l = 1,\ldots ,p,\\ \sigma _{st}^2(\beta ) = Cov({\phi _s}({Z_1},\beta ),{\phi _t}({Z_1},\beta )),\,s,t = 1,\ldots ,p,\\ \Sigma _{p \times p}^{(\beta )}:\ \mathrm{the\ asymptotic\ variance}\text{-}\mathrm{covariance\ matrix\ of}\ \sqrt{n} {W_n}(\beta ), \\ \mathrm{with\ elements}\ 4\sigma _{st}^2(\beta ),\,s,t = 1,\ldots ,p. \end{array} \right. \end{aligned}$$
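The jackknife pseudo-values defined above can be computed generically for any degree-2 U-statistic. The sketch below is purely illustrative (it uses the sample-variance kernel \(h(a,b)=(a-b)^2/2\) as a stand-in for \(H\) or \(K\)); it also exhibits the identity that the average of the pseudo-values recovers the U-statistic itself, which underlies (A.3)–(A.4).

```python
import numpy as np

def u_stat(kernel, Z):
    """Degree-2 U-statistic with a symmetric kernel over the rows of Z."""
    n = len(Z)
    s = sum(kernel(Z[i], Z[j]) for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2.0)

def jackknife_pseudovalues(kernel, Z):
    """U_n and the pseudo-values Q_i = n U_n - (n - 1) U_{n-1}^{(-i)}."""
    n = len(Z)
    U_full = u_stat(kernel, Z)
    Q = np.empty((n,) + np.shape(U_full))
    for i in range(n):
        Q[i] = n * U_full - (n - 1) * u_stat(kernel, np.delete(Z, i, axis=0))
    return U_full, Q
```

With the variance kernel, \(U_n\) coincides with the unbiased sample variance, and the pseudo-values average exactly to \(U_n\).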

Under conditions (C.1)–(C.3), following Jing et al. (2009) and Li et al. (2016), we prove Lemmas A.1–A.5.

Lemma A.1

Under conditions (C.1)–(C.3), as \(n \rightarrow \infty \), one has

$$\begin{aligned} \sqrt{n} {U_n}(\beta _0 ) \xrightarrow {d} N(0,\Sigma _{p \times p}^{(\beta _0 )}). \end{aligned}$$

Proof

From Li et al. (2016), \(\sqrt{n} {{W}_n}(\beta _0 )\) converges in distribution to a normal distribution with mean 0 and covariance matrix \(\Sigma _{p \times p}^{(\beta _0 )}\). Then, as \(n \rightarrow \infty \), by (A.1), we can derive that

$$\begin{aligned} \begin{aligned} E[\sqrt{n} {U_n}({\beta _0})]&= E[\sqrt{n} ({W_n}({\beta _0}) + {o_p}({n^{ - 1/2}}))]\\&= E[\sqrt{n} {W_n}({\beta _0}) + {o_p}(1)] = E[\sqrt{n} {W_n}({\beta _0})] + {o_p}(1) \rightarrow 0, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \mathrm{cov}\left( \sqrt{n} {U_n}(\beta _0 )\right)&= \mathrm{cov}\left( \sqrt{n} \left( {W_n}(\beta _0 ) + {o_p}\left( {n^{ - 1/2}}\right) \right) \right) = \mathrm{cov}\left( \sqrt{n} {W_n}(\beta _0 ) + {o_p}(1)\right) \\&= \mathrm{cov}\left( \sqrt{n} {W_n}(\beta _0 )\right) + 2\mathrm{cov}\left( \sqrt{n} {W_n}(\beta _0 ),{o_p}(1)\right) + {\mathrm{cov}} ({o_p}(1))\\&= \mathrm{cov}\left( \sqrt{n} {W_n}(\beta _0 )\right) + 2\left[ E\left( \sqrt{n} {W_n}(\beta _0 ) \times {o_p}(1)\right) - E\left( \sqrt{n} {W_n}(\beta _0 )\right) \right. \\&\quad \left. \times E({o_p}(1)) \right] + {o_p}(1) = \Sigma _{p \times p}^{(\beta _0 )} + {o_p}(1) \rightarrow \Sigma _{p \times p}^{(\beta _0 )}. \end{aligned}$$

Thus, Lemma A.1 holds. \(\square \)

Lemma A.2

Under conditions (C.1)–(C.3), with probability tending to one as \(n \rightarrow \infty \), the zero vector is contained in the interior of the convex hull of \(\left\{ {{{{\hat{Q}}}_1}(\beta _0 ),\ldots ,{{\hat{Q}}_n}(\beta _0 )} \right\} \).

Proof

Combining the Hoeffding decomposition, the proof of Lemma A.2 in Owen (1990) and Li et al. (2016), we complete the proof of Lemma A.2. \(\square \)

Lemma A.3

Under conditions (C.1)–(C.3), one has \({G^*}(\beta _0 ) = \Sigma _{p \times p}^{({\beta _0})} + o(1)\), a.s.

Proof

Combining Lemma A.1 in Li et al. (2016) and the strong law of large numbers for U-statistics, we get \({W_n}(\beta _0)=o(1)\) a.s. For \(l=1,\ldots ,p\), let \(\sigma _{H,l}^2(\beta _0) = Var({H_l}({Z_1},{Z_2};{\beta _0}))\). Since \(E[H_l^2({Z_1},{Z_2};\beta _0 )] < \infty \), we have \(\sigma _{H,l}^2(\beta _0)< \infty \). As a result,

$$\begin{aligned} \begin{aligned} G(\beta _0 )&= \frac{1}{n}\sum \limits _{i = 1}^n {{{\hat{V}}_i}(\beta _0 ){\hat{V}}_i^T(\beta _0 )} \\&= \frac{1}{n}\sum \limits _{i = 1}^n {\left[ {{{{\hat{V}}}_i}(\beta _0 ) - {W_n}(\beta _0 ) + {W_n}(\beta _0 )} \right] {{\left[ {{{\hat{V}}_i}(\beta _0 ) - {W_n}(\beta _0 ) + {W_n}(\beta _0 )} \right] }^T}} \\&= \frac{1}{n}\sum \limits _{i = 1}^n {\left[ {{{{\hat{V}}}_i}(\beta _0 ) - {W_n}(\beta _0 )} \right] {{\left[ {{{{\hat{V}}}_i}(\beta _0 ) - {W_n}(\beta _0 )} \right] }^T}} + {W_n}(\beta _0 )W_n^T(\beta _0 )\\&= \frac{1}{n}\sum \limits _{i = 1}^n \left[ {n{W_n}(\beta _0 ) - (n - 1)W_{n - 1}^{( - i)}(\beta _0 ) - {W_n}(\beta _0 )} \right] \left[ n{W_n}(\beta _0 ) \right. \\&\quad \left. - (n - 1)W_{n - 1}^{( - i)}(\beta _0 ) - {W_n}(\beta _0 ) \right] ^T + {W_n}(\beta _0 )W_n^T(\beta _0 )\\&= \frac{{{{(n - 1)}^2}}}{n}\sum \limits _{i = 1}^n {\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] {{\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] }^T}} + o(1)\;a.s. \end{aligned} \end{aligned}$$
(A.2)

From Lemma A.3 in Li et al. (2016), we have that

$$\begin{aligned} G(\beta _0 ) = \Sigma _{p \times p}^{({\beta _0})} + o(1)\;a.s. \end{aligned}$$

Also, since

$$\begin{aligned} \begin{aligned} {W_n}(\beta _0 )&= \frac{1}{n}\sum \limits _{i = 1}^n {{{\hat{V}}_n}(\beta _0 )} = \frac{1}{n}\sum \limits _{i = 1}^n {\left[ {n{W_n}(\beta _0 ) - (n - 1)W_{n - 1}^{( - i)}(\beta _0 )} \right] } \\&= n{W_n}(\beta _0 ) - \frac{{n - 1}}{n}\sum \limits _{i = 1}^n {W_{n - 1}^{( - i)}(\beta _0 )}, \end{aligned} \end{aligned}$$
(A.3)

it leads to

$$\begin{aligned} \sum \limits _{i = 1}^n {W_{n - 1}^{( - i)}(\beta _0 )} = n{W_n}(\beta _0 ). \end{aligned}$$
(A.4)

Furthermore,

$$\begin{aligned} \begin{aligned} {G^*}(\beta _0 )&= \frac{1}{n}\sum \limits _{i = 1}^n {{{\hat{Q}}_i}(\beta _0 )} {\hat{Q}}_i^T(\beta _0 )\\&= \frac{1}{n}\sum \limits _{i = 1}^n {\left[ {{{{\hat{Q}}}_i}(\beta _0 ) - {U_n}(\beta _0 ) + {U_n}(\beta _0 )} \right] {{\left[ {{{\hat{Q}}_i}(\beta _0 ) - {U_n}(\beta _0 ) + {U_n}(\beta _0 )} \right] }^T}} \\&= \frac{1}{n}\sum \limits _{i = 1}^n {\left[ {{{{\hat{Q}}}_i}(\beta _0 ) - {U_n}(\beta _0 )} \right] {{\left[ {{{{\hat{Q}}}_i}(\beta _0 ) - {U_n}(\beta _0 )} \right] }^T}} + {U_n}(\beta _0 )U_n^T(\beta _0 ). \end{aligned} \end{aligned}$$
(A.5)

Note that the first term \(\frac{1}{n}\sum \nolimits _{i = 1}^n {\left[ {{{\hat{Q}}_i}(\beta _0 ) - {U_n}(\beta _0 )} \right] {{\left[ {{{\hat{Q}}_i}(\beta _0 ) - {U_n}(\beta _0 )} \right] }^T}} \) in Eq. (A.5) is \(\Sigma _{p \times p}^{({\beta _0})} + o(1)\;a.s.\):

$$\begin{aligned}&\frac{1}{n}\sum \limits _{i = 1}^n {\left[ {{{\hat{Q}}_i}(\beta _0 ) - {U_n}(\beta _0 )} \right] {{\left[ {{{\hat{Q}}_i}(\beta _0 ) - {U_n}(\beta _0 )} \right] }^T}} \\&\quad = \frac{1}{n}\sum \limits _{i = 1}^n {\left[ {n{U_n}(\beta _0 ) - (n - 1)U_{n - 1}^{( - i)}(\beta _0 ) - {U_n}(\beta _0 )} \right] {{\left[ {n{U_n}(\beta _0 ) - (n - 1)U_{n - 1}^{( - i)}(\beta _0 ) - {U_n}(\beta _0 )} \right] }^T}} \\&\quad = \frac{{{{(n - 1)}^2}}}{n}\sum \limits _{i = 1}^n {\left[ {{U_n}(\beta _0 ) - U_{n - 1}^{( - i)}(\beta _0 )} \right] {{\left[ {{U_n}(\beta _0 ) - U_{n - 1}^{( - i)}(\beta _0 )} \right] }^T}} \\&\quad = \frac{{{{(n - 1)}^2}}}{n}\sum \limits _{i = 1}^n {\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 ) + o({n^{ - 1/2}})} \right] {{\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 ) + o({n^{ - 1/2}})} \right] }^T}} \\&\quad = \frac{{{{(n - 1)}^2}}}{n}\sum \limits _{i = 1}^n {\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] {{\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] }^T}} +o(1)\\&\qquad +\, \frac{{2{{(n - 1)}^2}}}{n}\sum \limits _{i = 1}^n {o({n^{ - 1/2}})\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] } \\&\quad = \frac{{{{(n - 1)}^2}}}{n}\sum \limits _{i = 1}^n {\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] {{\left[ {{W_n}(\beta _0 ) - W_{n - 1}^{( - i)}(\beta _0 )} \right] }^T}} + o(1)\\&\qquad +\, \frac{{2{{(n - 1)}^2}}}{n}o({n^{ - 1/2}})\left( {n{W_n}(\beta _0 ) - \sum \limits _{i = 1}^n {W_{n - 1}^{( - i)}(\beta _0 )} } \right) \\&\quad = G(\beta _0 ) + o(1)\\&\quad = \Sigma _{p \times p}^{({\beta _0})} + o(1)\;a.s. \end{aligned}$$

Also, based on the strong law of large numbers for U-statistics, one has \({U_n}(\beta _0 ) = o(1)\) a.s. Therefore, \(G^* (\beta _0)= \Sigma _{p \times p}^{({\beta _0})} + o(1)\;a.s\). \(\square \)

Lemma A.4

Let \({A_n} = {\max _{1 \le i \ne j \le n}}\left\| {K({Z_i},{Z_j};\beta _0 )} \right\| \). Under condition (C.1), we have \({A_n} = o({n^{1/2}})\) a.s.

Proof

By the Borel–Cantelli lemma and Li et al. (2016), \({A_n} = o({n^{1/2}})\) a.s. \(\square \)

Lemma A.5

Let \({B_n} = {\max _{1 \le i \le n}}\left\| {{{{\hat{Q}}}_i}(\beta _0 )} \right\| \). Under conditions (C.1)–(C.3), \({B_n} = o({n^{1/2}})\) and \({n^{ - 1}}{\sum \nolimits _{i = 1}^n {\left\| {{{{\hat{Q}}}_i}(\beta _0 )} \right\| } ^3} = o({n^{1/2}})\).

Proof

We can check that

$$\begin{aligned} \begin{aligned} {U_n}(\beta _0 )&= \frac{1}{{n(n - 1)}}\sum \limits _{l = 1}^n {\sum \limits _{j = 1,j \ne l}^n {K({Z_l},{Z_j};\beta _0 )} } \\&= \frac{2}{{n(n - 1)}}\sum \limits _{j = 1,j \ne i}^n {K({Z_i},{Z_j};\beta _0 )} + \frac{{n - 2}}{n}U_{n - 1}^{( - i)}(\beta _0 ). \end{aligned} \end{aligned}$$

Then, for any \(1 \le i \le n\),

$$\begin{aligned} \begin{aligned} \left\| {{{{\hat{Q}}}_i}(\beta _0 )} \right\|&= \left\| {\frac{2}{{n - 1}}\sum \limits _{j = 1,j \ne i}^n {K({Z_i},{Z_j};\beta _0 )} - U_{n - 1}^{( - i)}(\beta _0 )} \right\| \\&\le 3{\max _{1 \le i \ne j \le n}}\left\| {K({Z_i},{Z_j};\beta _0 )} \right\| \\&= 3{A_n}. \end{aligned} \end{aligned}$$
(A.6)

Combining (A.6) and the result of Lemma A.4, that is, \({A_n} = o({n^{1/2}})\) a.s., we prove Lemma A.5. \(\square \)

Proof of Theorem 2.1

We let \(\lambda =\rho \theta \), where \(\rho \ge 0\) and \(\left\| \theta \right\| = 1\). Let \(e_j\) be the unit vector in the jth coordinate direction. Following Owen (2001) and Lu and Liang (2006), we obtain from (2.5) that

$$\begin{aligned} \begin{aligned} 0&\ge \left| {{\theta ^ T}f(\rho \theta )} \right| \\&\ge \, \frac{{\rho {\theta ^T}{G^*}(\beta _0)\theta }}{{1 + \rho {B_n}}} - \frac{1}{n}\left| {\sum \limits _{j = 1}^p {e_j^T} \sum \limits _{i = 1}^n {{{{\hat{Q}}}_i}({\beta _0})} } \right| . \end{aligned} \end{aligned}$$

We also have \({G^*}(\beta _0 ) = \Sigma _{p \times p}^{({\beta _0})} + o(1)\) a.s. from Lemma A.3. Thus, one has

$$\begin{aligned} \left\| \lambda \right\| = \rho = {O_p}({n^{ - 1/2}}). \end{aligned}$$
(A.7)

Denote \({\eta _i} = {\lambda ^T}{{{\hat{Q}}}_i}({\beta _0})\). From Lemma A.5 and (A.7), we can obtain

$$\begin{aligned} \lambda = {({G^*}(\beta _0))^{ - 1}}{U_n}({\beta _0}) + \gamma , \end{aligned}$$

where \(\left\| \gamma \right\| = {o_p}({n^{ - 1/2}})\). By Taylor expansion, we obtain that

$$\begin{aligned} \begin{aligned} {l}({\beta _0})&= 2 \sum \limits _{i = 1}^n {{\eta _i} - \sum \limits _{i = 1}^n {\eta _i^2} } +{o_p}(1) \\&= nU_n^T({\beta _0}){({G^*}(\beta _0))^{ - 1}}{U_n}({\beta _0}) - n{\gamma ^T}{G^*}(\beta _0)\gamma +{o_p}(1). \end{aligned} \end{aligned}$$
(A.8)

In (A.8), the first term satisfies \(nU_n^T(\beta _0 ){(G^*(\beta _0))^{ - 1}}{U_n}(\beta _0 ) \xrightarrow {d} \chi _p^2\), while the second term is \(n{\gamma ^T}{G^*}(\beta _0)\gamma = n{o_p}({n^{ - 1/2}}){O_p}(1){o_p}({n^{ - 1/2}}) = {o_p}(1)\). Therefore, \( - 2\log R({\beta _0}) \xrightarrow {d} \chi _p^2\). \(\square \)
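In practice, the JEL statistic is obtained without any variance estimation by solving the Lagrange-multiplier equation numerically. The sketch below is a generic empirical-likelihood computation, not the authors' implementation: it assumes the pseudo-values \(\hat Q_i(\beta_0)\) are stacked in an \(n \times p\) array and runs a damped Newton iteration on the concave dual objective \(\sum_i \log (1+\lambda ^T \hat Q_i)\).

```python
import numpy as np

def jel_logratio(Q, max_iter=100, tol=1e-10):
    """Compute -2 log R(beta_0) = 2 * sum_i log(1 + lam^T Q_i), where lam
    solves sum_i Q_i / (1 + lam^T Q_i) = 0 (the Lagrange-multiplier equation),
    via Newton ascent on the concave dual sum_i log(1 + lam^T Q_i)."""
    n, p = Q.shape
    lam = np.zeros(p)
    for _ in range(max_iter):
        w = 1.0 + Q @ lam                      # EL weights are 1 / (n * w_i)
        grad = (Q / w[:, None]).sum(axis=0)
        if np.linalg.norm(grad) < tol:
            break
        curv = (Q.T * w ** -2) @ Q             # sum_i Q_i Q_i^T / w_i^2
        step = np.linalg.solve(curv, grad)     # ascent direction
        t = 1.0
        while np.any(1.0 + Q @ (lam + t * step) <= 1e-8):
            t *= 0.5                           # damp to keep all weights positive
        lam = lam + t * step
    return 2.0 * np.sum(np.log(1.0 + Q @ lam)), lam
```

The returned statistic can then be compared with a \(\chi _p^2\) (or, for the profile version, \(\chi _q^2\)) quantile to form confidence regions.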

Proof of Theorem 2.2

We follow arguments similar to those in Yu et al. (2011) and Yang and Zhao (2012). Corresponding to \(\beta _0=(\beta _{10}^T, \beta _{20}^T)^T\), we write \(Z=(Z_{1}^T, Z_{2}^T)^T\). Recall that \(\sqrt{n} (\hat{\beta }- {\beta _0})\) is asymptotically normally distributed with mean zero and variance-covariance matrix \(D_n{({\beta _0})^{ - 1}}B_n({\beta _0}){(D_n{({\beta _0})^{ - 1}})^T}\).

Define

$$\begin{aligned} {\bar{D}}({\beta _0}) = \mathop {\lim }\limits _{n \rightarrow \infty } E{\left[ {{{\partial {S_n}} / {\partial {\beta _2}}}} \right] _{{\beta _0}}}. \end{aligned}$$

Since \(D_n(\beta _0)\) is positive definite, \({\bar{D}}(\beta _0)\) is of rank \(p-q\). Denote

$$\begin{aligned} {{\hat{\beta }}_2} = \arg \mathop {\inf }\limits _{{\beta _2}} l\left[ {{{\left( \beta _{10}^T,\beta _{2}^T\right) }^T}} \right] . \end{aligned}$$

Similar to Qin and Lawless (1994), we can show that

$$\begin{aligned} \sqrt{n} \left( {{\hat{\beta }}_2} - {\beta _{20}}\right)= & {} - {\left( {\bar{D}}{\left( {\beta _0}\right) ^T}{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) ^{ - 1}}{\bar{D}}\left( {\beta _0}\right) \right) ^{ - 1}}\\&\times {\bar{D}}{\left( {\beta _0}\right) ^T}{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) ^{ - 1}}\frac{1}{{\sqrt{n} }}\sum \limits _{i = 1}^n {{{\hat{Q}}_i}\left( {\beta _0}\right) } + {o_p}(1), \end{aligned}$$

and

$$\begin{aligned}&\sqrt{n} {\lambda _2} = \left( {I - {{\left( \Sigma _{p \times p}^{({\beta _0})}\right) }^{ - 1}}{\bar{D}}({\beta _0}){{\left( {\bar{D}}{{\left( {\beta _0}\right) }^T}{{\left( \Sigma _{p \times p}^{({\beta _0})}\right) }^{ - 1}}\bar{D}({\beta _0})\right) }^{ - 1}}{\bar{D}}{{({\beta _0})}^T}} \right) \\&\quad \left( \Sigma _{p \times p}^{({\beta _0})}\right) ^{ - 1}\frac{1}{{\sqrt{n} }}\sum \limits _{i = 1}^n {{{{\hat{Q}}}_i}({\beta _0})} + {o_p}(1), \end{aligned}$$

where \(\lambda _2\) is the corresponding Lagrange multiplier. Recall that

$$\begin{aligned} {U_n}(\beta ) = \frac{1}{n}\sum \limits _{i = 1}^n {{{{\hat{Q}}}_i}} (\beta ). \end{aligned}$$

Hence, by Taylor’s expansion, one has that

$$\begin{aligned} {l^*}({\beta _{10}})&= {\left( {\frac{1}{{\sqrt{n} }}\sum \limits _{i = 1}^n {{{{\hat{Q}}}_i}({\beta _0})} } \right) ^T}\left( {{\left( \Sigma _{p \times p}^{({\beta _0})}\right) }^{ - 1}}\right. \\&\quad \left. -\, {{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1}}{\bar{D}}\left( {\beta _0}\right) {{\left( {\bar{D}}{{\left( {\beta _0}\right) }^T}{{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1}}\bar{D}\left( {\beta _0}\right) \right) }^{ - 1}}{\bar{D}}{{\left( {\beta _0}\right) }^T}{{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1}} \right) \\&\quad \times \left( {\frac{1}{{\sqrt{n} }}\sum \limits _{i = 1}^n {{{\hat{Q}}_i}\left( {\beta _0}\right) } } \right) + {o_p}\left( 1\right) \\&= {\left( {{{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1/2}}\frac{1}{{\sqrt{n} }}\sum \limits _{i = 1}^n {{{{\hat{Q}}}_i}\left( {\beta _0}\right) } } \right) ^T}\Psi \left( {{{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1/2}}\frac{1}{{\sqrt{n} }}\sum \limits _{i = 1}^n {{{\hat{Q}}_i}\left( {\beta _0}\right) } } \right) + {o_p}\left( 1\right) \\&= {\left( {{{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1/2}}\sqrt{n} {U_n}\left( {\beta _0}\right) } \right) ^T}\Psi \left( {{{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) }^{ - 1/2}}\sqrt{n} {U_n}\left( {\beta _0}\right) } \right) + {o_p}\left( 1\right) , \end{aligned}$$

where

$$\begin{aligned} \Psi = I - {\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) ^{ - 1/2}}{\bar{D}}\left( {\beta _0}\right) {\left( {\bar{D}}{\left( {\beta _0}\right) ^T}{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) ^{ - 1}}{\bar{D}}\left( {\beta _0}\right) \right) ^{ - 1}}{\bar{D}}{\left( {\beta _0}\right) ^T}{\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) ^{ - 1/2}}. \end{aligned}$$

\(\Psi \) is a symmetric matrix with trace q. By Lemma A.1,

$$\begin{aligned} {\left( \Sigma _{p \times p}^{\left( {\beta _0}\right) }\right) ^{ - 1/2}}\sqrt{n} {U_n}\left( {\beta _0}\right) \xrightarrow {d} N\left( 0,{I_{p \times p}}\right) . \end{aligned}$$

Then, we have that

$$\begin{aligned} - 2\log R^*\left( \beta _{10} \right) \mathop \rightarrow \limits ^d \chi _q^2. \end{aligned}$$

The proof of Theorem 2.2 is completed. \(\square \)


Cite this article

Yu, X., Zhao, Y. Jackknife empirical likelihood inference for the accelerated failure time model. TEST 28, 269–288 (2019). https://doi.org/10.1007/s11749-018-0601-7
