
Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments

Lifetime Data Analysis

Abstract

Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for them. In contrast, error-contaminated survival data under the additive hazards model have received relatively little attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and on changes in the hazard function. New insights into measurement error effects are revealed, in contrast to the well-documented results for the Cox proportional hazards model. We propose a class of bias-correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate their finite sample performance.

Fig. 1


References

  • Breslow NE, Day NE (1980) Statistical methods in cancer research, vol 1: the design and analysis of case–control studies. IARC, Lyon

  • Buzas JS (1998) Unbiased scores in proportional hazards regression with covariate measurement error. J Stat Plan Inference 67:247–257

  • Carroll RJ, Ruppert D, Stefanski LA, Crainiceanu CM (2006) Measurement error in nonlinear models: a modern perspective, 2nd edn. Chapman & Hall/CRC, Boca Raton

  • Cox DR (1972) Regression models and life-tables (with discussion). J R Stat Soc Ser B 34:187–220

  • Cox DR, Oakes D (1984) Analysis of survival data. Chapman & Hall/CRC, Boca Raton

  • Fuchs HJ, Borowitz DS, Christiansen DH, Morris EM, Nash ML, Ramsey BW, Rosenstein BJ, Smith AL, Wohl ME (1994) Effect of aerosolized recombinant human DNase on exacerbations of respiratory symptoms and on pulmonary function in patients with cystic fibrosis. N Engl J Med 331:637–642

  • Horn RA, Johnson CR (1985) Matrix analysis. Cambridge University Press, New York

  • Hu C, Lin DY (2004) Semiparametric failure time regression with replicates of mismeasured covariates. J Am Stat Assoc 99:105–118

  • Hu P, Tsiatis AA, Davidian M (1998) Estimating the parameters in the Cox model when covariates are measured with error. Biometrics 54:1407–1419

  • Huang Y, Wang CY (2000) Cox regression with accurate covariates unascertainable: a nonparametric-correction approach. J Am Stat Assoc 95:1209–1219

  • Jiang J, Zhou H (2007) Additive hazard regression with auxiliary covariates. Biometrika 94:359–369

  • Kalbfleisch JD, Prentice RL (2002) The statistical analysis of failure time data, 2nd edn. Wiley, Hoboken

  • Kulich M, Lin DY (2000) Additive hazards regression with covariate measurement error. J Am Stat Assoc 95:238–248

  • Li Y, Lin X (2003) Functional inference in frailty measurement error models for clustered survival data using the SIMEX approach. J Am Stat Assoc 98:191–203

  • Li Y, Ryan L (2004) Survival analysis with heterogeneous covariate measurement error. J Am Stat Assoc 99:724–735

  • Lin DY, Ying Z (1994) Semiparametric analysis of the additive risk model. Biometrika 81:61–71

  • Nakamura T (1992) Proportional hazards model with covariates subject to measurement error. Biometrics 48:829–838

  • Pollard D (1990) Empirical processes: theory and applications. IMS, Hayward

  • Prentice RL (1982) Covariate measurement errors and parameter estimation in a failure time regression model. Biometrika 69:331–342

  • Song X, Huang Y (2005) On corrected score approach for proportional hazards model with covariate measurement error. Biometrics 61:702–714

  • Sun L, Zhang Z, Sun J (2006) Additive hazards regression of failure time data with covariate measurement errors. Stat Neerlandica 60:497–509

  • van der Vaart AW (1998) Asymptotic statistics. Cambridge University Press, New York

  • Wang CY, Hsu L, Feng ZD, Prentice RL (1997) Regression calibration in failure time regression. Biometrics 53:131–145

  • Yan Y, Yi GY (2015) A class of functional methods for error-contaminated survival data under additive hazards models with replicate measurements. J Am Stat Assoc. doi:10.1080/01621459.2015.1034317

  • Yi GY, Lawless JF (2007) A corrected likelihood method for the proportional hazards model with covariates subject to measurement error. J Stat Plan Inference 137:1816–1828

  • Yi GY, Reid N (2010) A note on mis-specified estimating functions. Stat Sinica 20:1749–1769

  • Zucker DM, Spiegelman D (2008) Corrected score estimation in the proportional hazards model with misclassified discrete covariates. Stat Med 27:1911–1933


Author information

Corresponding author

Correspondence to Grace Y. Yi.

Appendix

In the following, for an \(m\times 1\) vector \(a=(a_{(1)},a_{(2)},\cdots , a_{(m)})^T\), we use the Euclidean norm \(||a||=(\sum a_{(i)}^2)^{1/2}\). For a matrix A, define \(||A||=\sup _{i,j}|A_{(i)(j)}|\). When we say that the vector process \(A_n(t)\) converges almost surely to A(t) uniformly in t, we mean that \(\sup _{0\le t\le \tau }||A_n(t)-A(t)||\mathop {\rightarrow }\limits ^{a.s.} 0\), as \(n\rightarrow \infty \).
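These two norms can be checked numerically; the following is a minimal NumPy sketch, with illustrative array values that are not taken from the paper:

```python
import numpy as np

# Euclidean norm ||a|| = (sum_i a_(i)^2)^{1/2} of a vector, and the
# entrywise supremum norm ||A|| = sup_{i,j} |A_(i)(j)| of a matrix,
# as defined above; the values below are illustrative only.
a = np.array([3.0, 4.0])
A = np.array([[1.0, -5.0], [2.0, 0.5]])

vec_norm = np.sqrt(np.sum(a ** 2))  # same value as np.linalg.norm(a)
mat_norm = np.max(np.abs(A))        # entrywise supremum

print(vec_norm, mat_norm)
```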

1.1 Appendix 1: regularity conditions

  1. R1.

    \(\{N_i(\cdot ),Y_i(\cdot ),Z_i(\cdot )\}, i=1,\cdots ,n\) are independent and identically distributed.

  2. R2.

    \(\Pr \{Y_1(\tau )=1\}>0\).

  3. R3.

    \(\Lambda _0(\tau )<\infty \).

  4. R4.

    \(\sup _{t\in [0,\tau ]}||E[Z_1^{\otimes 2}(t)]||<\infty \).

  5. R5.

    Bounded variation condition: for \(i=1,\cdots , n\), \(j=1,\cdots , p+q\),

    $$\begin{aligned} |Z_{i(j)}(0)|+\int _0^{\tau } |dZ_{i(j)}(u)|\le K \end{aligned}$$

    holds almost surely for all sample paths, where K is a constant.

  6. R6.

    All the \(n_i\), \(i=1,\cdots , n\), are bounded above by a constant \(N_0\), and

    $$\begin{aligned} \lim _{n\rightarrow \infty }n^{-1}\sum _{i=1}^n I\{n_i=j\} \end{aligned}$$

    exists for each \(j=1,\cdots ,N_0\).

  7. R7.

    \(||E(\epsilon _{11}^{\otimes 2})||<\infty \).

  8. R8.

    \(\int _0^{\tau }E\left[ Y_1(t)\{{Z}_1(t)-e(t)\}^{\otimes 2}\right] dt\), \(\Sigma _{nv}\), and \(\Sigma _{rc}^*\) are positive definite.

1.2 Appendix 2: Proof of Theorem 1

Since each component of the vector \(n^{-1}\sum _{i=1}^n Y_i(t)Z_i(t)\) is of bounded variation by Condition R5, it can be written as the difference of two nondecreasing functions. Thus, \(n^{-1}\sum _{i=1}^n Y_i(t)Z_i(t)\) is manageable (Pollard 1990, p. 38), and \(n^{-1}\sum _{i=1}^n Y_i(t)\hat{Z}_i(t)\) is manageable. Together with Conditions R4, R6 and R7, the two conditions of the uniform strong law of large numbers (USLLN) (Pollard 1990, p. 41) are thus verified, and we obtain that as \(n\rightarrow \infty \), \(n^{-1}\sum _{i=1}^n Y_i(t)\hat{Z}_i(t)\mathop {\rightarrow }\limits ^{a.s.} E[Y_i(t)Z_i(t)]\) uniformly in t. Similarly, \(n^{-1}\sum _{i=1}^n Y_i(t)\mathop {\rightarrow }\limits ^{a.s.} E[Y_i(t)]\), and \(n^{-1}\sum _{i=1}^n N_i(t)\mathop {\rightarrow }\limits ^{a.s.} E[N_i(t)]\) uniformly in t. By USLLN together with Condition R2, \(\sum _{i=1}^n Y_i(t)\hat{Z}_i(t)/\sum _{i=1}^n Y_i(t)\mathop {\rightarrow }\limits ^{a.s.} e(t)\) uniformly in t. Thus, \(n^{-1}\sum _{i=1}^n\int _0^{\tau } \tilde{Z}(t)dN_i(t)\mathop {\rightarrow }\limits ^{a.s.}\int _0^{\tau }e(t)dE[N_i(t)]\). Similarly,

\(n^{-1}\sum _{i=1}^n\int _0^{\tau } \hat{Z}_i(t)dN_i(t)\mathop {\rightarrow }\limits ^{a.s.}\int _0^{\tau }E\left[ Z_i(t)dN_i(t)\right] .\) Note that \(E[\hat{Z}_i^{\otimes 2}(t)]=E[Z_i^{\otimes 2}(t)]+\Sigma _1/n_i\). Thus,

$$\begin{aligned} n^{-1}\sum _{i=1}^n\int _0^{\tau }Y_i(t)\hat{Z}_i^{\otimes 2}(t)\beta dt\mathop {\longrightarrow }\limits ^{a.s.} \int _0^{\tau }E\left[ Y_i(t){Z}_i^{\otimes 2}(t)\beta \right] dt+\rho _0\Sigma _1\beta \int _0^{\tau }E\left[ Y_i(t)\right] dt. \end{aligned}$$

After some algebra, these results imply that \(n^{-1} U_{nv}(\beta )\mathop {\longrightarrow }\limits ^{a.s.}\mathcal {U}_{nv}(\beta )\), as \(n\rightarrow \infty \). Thus, \(\hat{\beta }_{nv}\mathop {\rightarrow }\limits ^{a.s.}\beta _{nv}^*\), as \(n\rightarrow \infty \).

Now we show that

$$\begin{aligned} n^{-1/2}U_{nv}(\beta _{nv}^*)=&\,n^{-1/2}\sum _{i=1}^n\biggl [\int _0^{\tau }\left\{ \hat{Z}_i(t)-e(t)\right\} d \tilde{M}_i(t;\beta )\nonumber \\&- \int _0^{\tau }Y_i(t)\left\{ \hat{Z}_i(t)-e(t)\right\} ^{\otimes 2}\left( \beta _{nv}^*-\beta \right) dt\biggr ]+o_p(1). \end{aligned}$$
(13)

Note that \(U_{nv}(\beta _{nv}^*)=U_1^*-U_2^*\), where

$$\begin{aligned} U_1^*= & {} \sum _{i=1}^n\int _0^{\tau }\left\{ \hat{Z}_i(t)-\tilde{Z}(t)\right\} d\tilde{M}_i(t;\beta ),\\ \text {and}\ \ U_2^*= & {} \sum _{i=1}^n\int _0^{\tau }\left\{ \hat{Z}_i(t)-\tilde{Z}(t)\right\} Y_i(t)\hat{Z}_i^T(t)\left( \beta _{nv}^*-\beta \right) dt. \end{aligned}$$

Similar to the proof of Theorem 1 of Kulich and Lin (2000), we obtain that

$$\begin{aligned} n^{-1/2}U_1^*=n^{-1/2}\sum _{i=1}^n\int _0^{\tau }\left\{ \hat{Z}_i(t)-e(t)\right\} d\tilde{M}_i(t;\beta )+o_p(1). \end{aligned}$$

To prove (13), it remains to prove that

$$\begin{aligned} n^{-1/2}U_2^*\equiv & {} n^{-1/2}\sum _{i=1}^n\int _0^{\tau }Y_i(t)\left\{ \hat{Z}_i(t)-\tilde{Z}(t)\right\} ^{\otimes 2}\left( \beta _{nv}^*-\beta \right) dt\nonumber \\= & {} n^{-1/2}\sum _{i=1}^n\int _0^{\tau }Y_i(t)\left\{ \hat{Z}_i(t)-e(t)\right\} ^{\otimes 2}\left( \beta _{nv}^*-\beta \right) dt+o_p(1). \end{aligned}$$
(14)

Observe that

$$\begin{aligned}&n^{-1/2}\int _0^{\tau }\sum _{i=1}^nY_i(t)\tilde{Z}^{\otimes 2}(t)dt\nonumber \\= & {} n^{-1/2}\int _0^{\tau }\left\{ \tilde{Z}(t)-e(t)\right\} \sum _{i=1}^nY_i(t)\hat{Z}_i^T(t)dt+n^{-1/2}\int _0^{\tau }e(t) \sum _{i=1}^nY_i(t)\hat{Z}_i^T(t)dt\nonumber \\= & {} n^{1/2}\int _0^{\tau }\left\{ \tilde{Z}(t)-e(t)\right\} E\left\{ Y_i(t)\hat{Z}_i^T(t)\right\} dt+n^{-1/2}\int _0^{\tau }e(t) \sum _{i=1}^nY_i(t)\hat{Z}_i^T(t)dt+o_p(1)\nonumber \\= & {} n^{-1/2}\int _0^{\tau }\frac{\sum _{i=1}^nY_i(t)\hat{Z}_i(t)}{E\{Y_i(t)\}} E\left\{ Y_i(t)\hat{Z}_i^T(t)\right\} dt\nonumber \\&-\int _0^{\tau }\frac{E\left\{ Y_i(t)\hat{Z}_i(t)\right\} }{\left[ E\{Y_i(t)\}\right] ^2}{E\left\{ Y_i(t)\hat{Z}_i^T(t)\right\} }\sqrt{n} \left[ \frac{1}{n}\sum _{i=1}^nY_i(t)-E\{Y_i(t)\}\right] dt\nonumber \\&-\,n^{1/2}\int _0^{\tau }e(t) E\left\{ Y_i(t)\hat{Z}_i^T(t)\right\} dt+n^{-1/2}\int _0^{\tau }e(t) \sum _{i=1}^nY_i(t)\hat{Z}_i^T(t)dt+o_p(1)\nonumber \\= & {} n^{-1/2}\int _0^{\tau }\sum _{i=1}^nY_i(t)\left\{ \hat{Z}_i(t)e^T(t)+e(t)\hat{Z}_i^T(t)-e^{\otimes 2}(t)\right\} dt+o_p(1). \end{aligned}$$
(15)

Plugging (15) into \(n^{-1/2}U_2^*\), we obtain (14), and thus (13) is proved.

By the Taylor series expansion, \(0=n^{-1/2}U_{nv}(\hat{\beta }_{nv})= n^{-1/2}U_{nv}(\beta _{nv}^*)+ \left[ n^{-1}\frac{\partial U_{nv}(\beta )}{\partial \beta }\right] n^{1/2}(\hat{\beta }_{nv}-\beta _{nv}^*)\), leading to

$$\begin{aligned} n^{1/2}\left( \hat{\beta }_{nv}-\beta _{nv}^*\right) =-\left[ n^{-1}\frac{\partial U_{nv}(\beta )}{\partial \beta }\right] ^{-1}n^{-1/2}U_{nv}(\beta _{nv}^*). \end{aligned}$$
(16)

It is straightforward that

$$\begin{aligned} -n^{-1}\frac{\partial U_{nv}(\beta )}{\partial \beta }\mathop {\rightarrow }\limits ^{a.s.}\lim _{n\rightarrow \infty }n^{-1}\int _0^{\tau }\sum _{i=1}^nE\left[ Y_i(t)\left\{ \hat{Z}_i(t)-e(t)\right\} ^{\otimes 2}\right] dt= \mathcal {D}_{nv}. \end{aligned}$$

Let \(U_{nv,i}=\int _0^{\tau }\left\{ \hat{Z}_i(t)-e(t)\right\} d \tilde{M}_i(t;\beta )- \int _0^{\tau }Y_i(t)\left\{ \hat{Z}_i(t)-e(t)\right\} ^{\otimes 2}(\beta _{nv}^*-\beta )dt\), \(i=1,\cdots ,n\). By Condition R6, \(E[||n^{-1/2}U_{nv,i}||^2I\{||n^{-1/2}U_{nv,i}||>\epsilon \}]\) can take at most \(N_0\) possible values for fixed \(\epsilon >0\). Without loss of generality, suppose the maximum is achieved at \(i=1\). It follows from the Markov inequality that \(\Pr \{||n^{-1/2}U_{nv,1}||>\epsilon \}\le n^{-1}E[||U_{nv,1}||^2]/\epsilon ^2\rightarrow 0\) as \(n\rightarrow \infty \), and thus

$$\begin{aligned}&\sum _{i=1}^nE[||n^{-1/2}U_{nv,i}||^2I\{||n^{-1/2}U_{nv,i}||>\epsilon \}]\\&\quad \le E[||U_{nv,1}||^2I\{||n^{-1/2}U_{nv,1}||>\epsilon \}]\rightarrow 0 \ \ \text {as}\ \ n\rightarrow \infty . \end{aligned}$$

Consequently, the Lindeberg condition (van der Vaart 1998, p. 20) is verified. By the multivariate Lindeberg–Feller central limit theorem (van der Vaart 1998, p. 20), we obtain that \(n^{-1/2}U_{nv}(\beta _{nv}^*)\) is asymptotically normal with mean 0 and covariance matrix \(\Sigma _{nv}=\lim _{n\rightarrow \infty }n^{-1}\sum _{i=1}^nE( U_{nv,i})^{\otimes 2}\). It follows from (16) that \(\hat{\beta }_{nv}\) is asymptotically normal with mean \(\beta _{nv}^*\) and covariance matrix \(\mathcal {D}_{nv}^{-1}\Sigma _{nv}\mathcal {D}_{nv}^{-1}\). The matrices \(\Sigma _{nv}\) and \(\mathcal {D}_{nv}\) can be consistently estimated by their empirical counterparts.
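The sandwich covariance \(\mathcal {D}_{nv}^{-1}\Sigma _{nv}\mathcal {D}_{nv}^{-1}\) is computed by plugging in empirical counterparts. A minimal numerical sketch follows, in which the per-subject contributions `U` and the derivative matrix `D_hat` are hypothetical placeholders rather than output of an actual additive hazards fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3

# Hypothetical per-subject estimating-function contributions U_{nv,i}
# (rows of U) and an empirical derivative matrix D_hat; in practice both
# come from the fitted additive hazards model, not from random numbers.
U = rng.normal(size=(n, p))
D_hat = np.eye(p) + 0.1 * np.ones((p, p))   # placeholder for D_nv

Sigma_hat = (U.T @ U) / n                   # empirical counterpart of Sigma_nv
D_inv = np.linalg.inv(D_hat)
cov_beta = D_inv @ Sigma_hat @ D_inv.T / n  # sandwich variance estimate

print(cov_beta.shape)
```

The division by `n` at the end converts the asymptotic covariance of \(\sqrt{n}(\hat{\beta }_{nv}-\beta _{nv}^*)\) into a variance estimate for \(\hat{\beta }_{nv}\) itself.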

1.3 Appendix 3: asymptotic equivalence of \(\hat{\beta }_{bc}\) and \(\hat{\beta }_{szs}\)

Note that

$$\begin{aligned} \sqrt{n}\left\{ \tilde{Z}_{r}(t)-e(t)\right\} \left\{ \tilde{Z}_{s}(t)-e(t)\right\} ^T=o_p(1). \end{aligned}$$

Thus,

$$\begin{aligned} {\sqrt{n}\hat{B}_{bc,1}}= & {} \frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }\frac{Y_i(t)}{n_i(n_i-1)}\sum _{1\le r\ne s\le n_i}\left\{ \hat{Z}_{ir}(t)- \tilde{Z}_{r}(t)\right\} \left\{ \hat{Z}_{is}(t)-\tilde{Z}_{s}(t)\right\} ^Tdt\\= & {} \frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }\frac{Y_i(t)}{n_i(n_i-1)}\sum _{1\le r\ne s\le n_i}\hat{Z}_{ir}(t)\hat{Z}_{is}(t)^T dt\\&-\frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }{Y_i(t)} \tilde{Z}(t)e(t)^T dt\\&-\frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }{Y_i(t)} e(t)\tilde{Z}(t)^T dt+\frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }{Y_i(t)} e^{\otimes 2}(t) dt+o_p(1)\\= & {} \frac{1}{\sqrt{n}}\sum _{i=1}^n\int _0^{\tau }\frac{Y_i(t)}{n_i(n_i-1)}\sum _{1\le r\ne s\le n_i}\hat{Z}_{ir}(t)\hat{Z}_{is}(t)^T dt\\&-\frac{1}{\sqrt{n}}\int _0^{\tau }\sum _{i=1}^nY_i(t)\tilde{Z}^{\otimes 2}(t)dt+o_p(1)\\= & {} {\sqrt{n}\hat{B}_{szs,1}+o_p(1)}, \end{aligned}$$

where the third equality follows from (15). Together with error bounds for inverse matrices (Horn and Johnson 1985), we obtain \(\sqrt{n}(\hat{B}_{bc,1}^{-1}-\hat{B}_{szs,1}^{-1})=o_p(1)\). Note also that \(\hat{B}_{bc,1}+\hat{B}_{bc,2}=\hat{B}_{szs,1}+\hat{B}_{szs,2}\). It then follows that

$$\begin{aligned} \sqrt{n}\left( \hat{\beta }_{bc}-\hat{\beta }_{szs}\right)= & {} \sqrt{n}\left\{ \hat{B}_{bc,1}^{-1}\left( \hat{B}_{bc,1}+\hat{B}_{bc,2}\right) \hat{\beta }_{nv}-\hat{B}_{szs,1}^{-1}\left( \hat{B}_{szs,1}+\hat{B}_{szs,2}\right) \hat{\beta }_{nv}\right\} \\= & {} \sqrt{n}\left( \hat{B}_{bc,1}^{-1}-\hat{B}_{szs,1}^{-1}\right) \left( \hat{B}_{bc,1}+\hat{B}_{bc,2}\right) \hat{\beta }_{nv}\\= & {} o_p(1). \end{aligned}$$

Thus, \(\hat{\beta }_{bc}\) and \(\hat{\beta }_{szs}\) are asymptotically equivalent, and they have the same asymptotic distribution.

1.4 Appendix 4

Following Carroll et al. (2006), we estimate \({\Sigma }_{xx}\), \({\Sigma }_{xv}\), \({\Sigma }_{vv}\), \(\mu _x\), and \(\mu _{v}\) by \(\hat{\Sigma }_{xx}\), \(\hat{\Sigma }_{xv}\), \(\hat{\Sigma }_{vv}\), \(\bar{W}_{\cdot \cdot }={\sum _{i=1}^n\sum _{j=1}^{n_i}W_{ij}}/{\sum _{i=1}^n n_i}\) and \(\bar{V}_{\cdot }(t)\), respectively, where

$$\begin{aligned} \hat{\Sigma }_{vv}= & {} ({n-1})^{-1}\sum _{i=1}^n\left( V_i(t)-\bar{V}_{\cdot }(t)\right) \left( V_i(t)-\bar{V}_{\cdot }(t)\right) ^T,\\ \hat{\Sigma }_{xv}= & {} \frac{\sum _{i=1}^n n_i}{\left( \sum _{i=1}^n n_i\right) ^2-\sum _{i=1}^n n_i^2}\sum _{i=1}^n n_i \left( \bar{W}_{i \cdot }-\bar{W}_{\cdot \cdot }\right) \left( V_i(t)-\bar{V}_{\cdot }(t)\right) ^T,\\ \text {and} \ \ \hat{\Sigma }_{xx}= & {} \frac{\sum _{i=1}^n n_i}{\left( \sum _{i=1}^n n_i\right) ^2-\sum _{i=1}^n n_i^2}\left[ \sum _{i=1}^n n_i \left( \bar{W}_{i \cdot }-\bar{W}_{\cdot \cdot }\right) \left( \bar{W}_{i \cdot }-\bar{W}_{\cdot \cdot }\right) ^T-(n-1)\hat{\Sigma }_0\right] . \end{aligned}$$
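For intuition, these moment estimators can be coded directly. Below is a univariate sketch on simulated replicates, where the surrogate model \(W_{ij}=X_i+e_{ij}\), the error standard deviation 0.5, the equal replicate counts, and the within-subject ANOVA form used for \(\hat{\Sigma }_0\) are all illustrative assumptions rather than specifications from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
n_i = np.full(n, 3)   # replicate counts (equal here; the formulas allow unequal)

# Simulated surrogates W_ij = X_i + e_ij with Var(X) = 1, Var(e) = 0.25
X = rng.normal(1.0, 1.0, size=n)
W = X[:, None] + rng.normal(0.0, 0.5, size=(n, 3))

W_bar_i = W.mean(axis=1)       # subject-level means (W-bar_{i.})
W_bar = W.sum() / n_i.sum()    # overall mean (W-bar_{..})

# Within-subject (ANOVA-type) estimator of the error variance Sigma_0
Sigma0_hat = ((W - W_bar_i[:, None]) ** 2).sum() / (n_i - 1).sum()

# Between-subject estimator of Sigma_xx, as displayed above
denom = n_i.sum() ** 2 - (n_i ** 2).sum()
Sigma_xx_hat = (n_i.sum() / denom) * (
    (n_i * (W_bar_i - W_bar) ** 2).sum() - (n - 1) * Sigma0_hat
)
```

With equal replicate counts \(n_i=k\), the \(\hat{\Sigma }_{xx}\) formula reduces to the familiar \(\sum _i(\bar{W}_{i\cdot }-\bar{W}_{\cdot \cdot })^2/(n-1)-\hat{\Sigma }_0/k\), i.e. the between-subject variance corrected for the averaged measurement error.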

1.5 Appendix 5: Proof of Theorem 2

First, we consider the case where all \(n_i\) are equal. Note that

$$\begin{aligned} \hat{\beta }_{rc}= & {} \left[ \sum _{i=1}^n\int _0^{\tau } Y_i(t)\hat{A}_{rc,i}\left\{ \hat{Z}_{i}(t)-\tilde{Z}(t)\right\} ^{\otimes 2}\hat{A}_{rc,i}^Tdt\right] ^{-1}\\&\left[ \sum _{i=1}^n\int _0^{\tau }\hat{A}_{rc,i}\left\{ \hat{Z}_i(t)-\tilde{Z}(t)\right\} dN_i(t)\right] \\= & {} \left( \hat{A}_{rc,1}^{-1}\right) ^T\left[ \sum _{i=1}^n\int _0^{\tau } Y_i(t)\left\{ \hat{Z}_{i}(t)-\tilde{Z}(t)\right\} ^{\otimes 2}dt\right] ^{-1}\hat{A}_{rc,1}^{-1}\hat{A}_{rc,1}\\&\left[ \sum _{i=1}^n\int _0^{\tau }\left\{ \hat{Z}_i(t)-\tilde{Z}(t)\right\} dN_i(t)\right] =\left( \hat{A}_{rc,1}^{-1}\right) ^T\hat{\beta }_{nv}. \end{aligned}$$

It follows that \(\hat{\beta }_{rc}\mathop {\longrightarrow }\limits ^{a.s.}\beta _{rc}^*\), as \(n\rightarrow \infty \). Together with the fact that

$$\begin{aligned} \sqrt{n}\left\{ \left( \hat{A}_{rc,1}^{-1}\right) ^T-\left( {A}_{rc,1}^{-1}\right) ^T\right\} \left( \hat{\beta }_{nv}-\beta _{nv}^*\right) =o_p(1), \end{aligned}$$

we obtain

$$\begin{aligned}&\sqrt{n}\left( \hat{\beta }_{rc}-{\beta }_{rc}^*\right) = \sqrt{n}\left\{ \left( \hat{A}_{rc,1}^{-1}\right) ^T\hat{\beta }_{nv}-\left( {A}_{rc,1}^{-1}\right) ^T{\beta }_{nv}^*\right\} \nonumber \\&\qquad =\sqrt{n}\left( {A}_{rc,1}^{-1}\right) ^T\left( \hat{\beta }_{nv}-{\beta }_{nv}^*\right) - \sqrt{n}\left\{ \left( \hat{A}_{rc,1}^{-1}\right) ^T-\left( {A}_{rc,1}^{-1}\right) ^T\right\} {\beta }_{nv}^*+o_p(1). \nonumber \\ \end{aligned}$$
(17)

Since the asymptotic expansion of \(\sqrt{n}({A}_{rc,1}^{-1})^T(\hat{\beta }_{nv}-{\beta }_{nv}^*)\) is given by Theorem 1, it follows from (17) that, to obtain the asymptotic expansion of \(\sqrt{n}(\hat{\beta }_{rc}-{\beta }_{rc}^*)\), we need only examine \(\sqrt{n}\{(\hat{A}_{rc,1}^{-1})^T-({A}_{rc,1}^{-1})^T\}{\beta }_{nv}^*\). This can be done, in principle, by a Taylor series expansion. In the following, we first study the asymptotic expansion for the univariate case, and then the multivariate case.

By considering a Taylor series expansion, we obtain

$$\begin{aligned} \sqrt{n}\left\{ \left( \hat{A}_{rc,1}^{-1}\right) ^T-\left( {A}_{rc,1}^{-1}\right) ^T\right\}= & {} \sqrt{n}\left( \frac{\hat{\Sigma }_{xx}+\hat{\Sigma }_0/n_1}{\hat{\Sigma }_{xx}} -\frac{{\Sigma }_{xx}+{\Sigma }_0/n_1}{{\Sigma }_{xx}}\right) \\= & {} \frac{\sqrt{n}\hat{\Sigma }_0/n_1}{{\Sigma }_{xx}}- \frac{{\Sigma }_0/n_1}{{\Sigma }_{xx}^2}\sqrt{n}\hat{\Sigma }_{xx}+o_p(1). \end{aligned}$$

Noting that

$$\begin{aligned} \sqrt{n}\hat{\Sigma }_{xx}= & {} \sqrt{n} \left\{ \frac{\sum _{i=1}^n\left( \bar{W}_{i\cdot }-\bar{W}_{\cdot \cdot }\right) ^2}{n-1}\right\} -\sqrt{n}\hat{\Sigma }_0/n_1\\= & {} \sqrt{n} \left\{ \frac{\sum _{i=1}^n\left( \bar{W}_{i\cdot }-\mu _x\right) ^2}{n}\right\} -\sqrt{n}\left\{ \frac{\sum _{i=1}^n\sum _{r=1}^{n_i}\left( W_{ir}-\bar{W}_{i\cdot }\right) ^{\otimes 2}}{n_1\sum _{i=1}^n(n_i-1)}\right\} +o_p(1), \end{aligned}$$

together with (17), we obtain that \(\sqrt{n}(\hat{\beta }_{rc}-{\beta }_{rc}^*)\) is asymptotically a sum of independent terms, and thus \(\hat{\beta }_{rc}\) is asymptotically normal with mean \({\beta }_{rc}^*\) and variance \(\Sigma _{rc}^*\), which can be obtained readily from (17).

For the multivariate case, \(\hat{\beta }_{rc}\) is still asymptotically normal with mean \({\beta }_{rc}^*\) and covariance matrix \(\Sigma _{rc}^*\), whose form is complicated. We suggest using only the first term of (17) to obtain an approximate variance of \(\hat{\beta }_{rc}\): \(\Sigma _{rc}^*\approx [{A}_{rc,1}^{-1}]^T\mathcal {D}_{nv}^{-1}\Sigma _{nv}\mathcal {D}_{nv}^{-1}[{A}_{rc,1}^{-1}]\), which can be consistently estimated by its empirical counterpart.
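This approximation is plain linear algebra once the pieces are estimated. In the sketch below, `A_rc1` and `cov_nv` are hypothetical values standing in for \({A}_{rc,1}\) and the naive sandwich covariance \(\mathcal {D}_{nv}^{-1}\Sigma _{nv}\mathcal {D}_{nv}^{-1}\); in practice both would be estimated from the data:

```python
import numpy as np

# Hypothetical calibration matrix A_rc1 and naive sandwich covariance cov_nv.
A_rc1 = np.array([[0.8, 0.1],
                  [0.0, 0.9]])
cov_nv = np.array([[0.04, 0.01],
                   [0.01, 0.05]])

# Since beta_rc = (A_rc1^{-1})^T beta_nv, treating A_rc1 as known gives the
# first-term approximation Sigma_rc ~ (A_rc1^{-1})^T cov_nv (A_rc1^{-1}).
A_invT = np.linalg.inv(A_rc1).T
cov_rc = A_invT @ cov_nv @ A_invT.T

print(cov_rc)
```

Because the map from \(\hat{\beta }_{nv}\) to \(\hat{\beta }_{rc}\) is linear, the result stays symmetric and positive definite whenever `cov_nv` is.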

For the general case where the \(n_i\) are not necessarily equal, it can be shown that \(\hat{\beta }_{rc}\) is still asymptotically normal, with a complicated form for the asymptotic covariance.


Cite this article

Yan, Y., Yi, G.Y. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments. Lifetime Data Anal 22, 321–342 (2016). https://doi.org/10.1007/s10985-015-9340-1

