1 Introduction

Truncation is common in survival analysis, where the incompleteness of the observations is due to a systematically biased selection process built into the study design. Right truncated data arise naturally when an incubation period (i.e., the time between disease incidence and the onset of clinical symptoms) cannot be observed completely in a retrospective study. In survival analysis, right truncation leads to biased sampling in which shorter observations are oversampled (Gürler 1996). For example, in studies of AIDS caused by blood transfusion (Lagakos et al. 1988), the incubation period is the time from a contaminated blood transfusion to the time when symptoms and signs of AIDS first appear. However, in such studies the follow-up period is usually limited, so only subjects who develop AIDS before the end of the study can be identified.

Many authors have studied right truncated data: Woodroofe (1985) and Wang et al. (1986) focused on the asymptotic properties of the product limit estimator under random truncation. Keiding and Gill (1990) studied asymptotic properties of the random left truncation estimator by reparametrizing the left truncation model as a three-state Markov process. Lagakos et al. (1988) considered nonparametric estimation and inference for right truncated data by treating the process in reverse time; they showed that \(\lambda ^{B}(t)=\lambda (\tau -t)\), where \(\tau \) is the study duration, and \(\lambda ^{B}(t)\) and \(\lambda (t)\) are the reverse-time and forward-time hazards, respectively. The authors also discussed the implications and limitations of introducing the reverse-time hazard to analyze right truncated data. Gross (1992) further explained the necessity of the reverse-time hazard in the Cox model setting.

However, most of the current literature studies right truncated data in a nonparametric setting; fairly few authors have studied semiparametric models. Among them, Kalbfleisch and Lawless (1989) formulated the Cox model on the reverse-time hazard (or retro hazard; Lagakos et al. 1988; Keiding and Gill 1990). For other related work on the reverse-time hazard, see Gross (1992) and Chen et al. (2004), among others.

In this paper, we study right truncated data under a semiparametric proportional odds model. Unlike the proportional hazards model, the reverse-time hazard in the proportional odds model has a simple log-linear relationship with the forward-time hazard, which leads to an intuitive estimator. While Sundaram (2009)’s method can also be adapted to the proportional odds model for right truncated data, she focused on applying a reversed-time argument to an estimator for left truncated data. Our estimator, on the other hand, exploits a direct relationship among the reverse-time hazard, the forward-time hazard, and the baseline odds function, yielding a simpler estimator. Weight functions are also inserted into the estimating equation to obtain more efficient estimates.

The rest of the paper is organized as follows. Section 2 describes the inference procedure as well as the asymptotic results, Sect. 3 shows simulation and real data results, and Sect. 4 provides some discussion. Proofs of the theorems are given in the Appendix.

2 Inference procedure

Assume that the failure time of interest T follows the semiparametric proportional odds model:

$$\begin{aligned} \log \left\{ \frac{1-S(t\mid Z)}{S(t\mid Z)}\right\} =\alpha (t)+Z^\top \beta , \end{aligned}$$
(1)

and the observed failure time is subject to a right truncation time variable R. The observed data are \((T_i,R_i),~i=1,\ldots ,n\), where \(T_i\le R_i\). Let \(\tau \) be the study duration, which is greater than \(\max \{T_1,T_2,\ldots ,T_n\}\). An (observed) reverse-time sample, \((T_i^{*},R_i^{*}),~i=1,\ldots ,n\), can be constructed, where \(T^{*}=\tau -T,~R^{*}=\tau -R\), so that \(T^{*}\) is left truncated by the variable \(R^{*}\). Denote by \(({\tilde{T}}^{*},{\tilde{R}}^{*})\) the reverse-time sample (potentially truncated). The hazard function of \({\tilde{T}}^{*}\) is then a quantity that originates at \(\tau \) and counts backward in time. The reverse hazard and cumulative reverse hazard functions of the backward recurrence time are defined as

$$\begin{aligned}{} & {} \lambda ^B(t\mid Z)=\lim _{\Delta t\rightarrow 0}\frac{\text {Pr}\{{\tilde{T}}^{*}\in (t-\Delta t,t]\mid {\tilde{T}}^{*}\le t, Z\}}{\Delta t}=\frac{f(t\mid Z)}{F(t\mid Z)},\\{} & {} \Lambda ^B(t\mid Z)=\int _t^{\tau }\lambda ^B(s\mid Z)ds. \end{aligned}$$

We would like to mention that a similar definition of the reverse hazard can also be found in Kalbfleisch and Lawless (1989) and Jiang (2011). Denote \(v(t)=\exp (\alpha (t))\), and let \(\lambda (t)=f(t)/S(t)\) be the forward-time hazard; then

$$\begin{aligned} \log \lambda (t\mid Z)-\log \lambda ^{B}(t\mid Z)= & {} \alpha (t)+Z^\top \beta , \lambda ^B(t\mid Z)\\ {}= & {} \frac{1}{\{1+v(t)\exp (Z^\top \beta )\}v(t)}\frac{dv(t)}{dt}. \end{aligned}$$
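The identities above are easy to verify numerically. The following minimal check (assuming, for illustration, the baseline \(\alpha (t)=3\log t\) used later in the simulations) confirms that \(\log \lambda (t\mid Z)-\log \lambda ^{B}(t\mid Z)=\alpha (t)+Z^\top \beta \):

```python
import math

def check(t, zb):
    """Compare log forward hazard minus log reverse hazard with alpha(t) + Z'beta,
    under model (1) with alpha(t) = 3 log t, i.e. v(t) = t^3 (zb = Z'beta)."""
    v, dv = t**3, 3 * t**2                       # v(t) and v'(t)
    denom = 1 + v * math.exp(zb)                 # 1 + exp(Z'beta) v(t)
    lam_fwd = math.exp(zb) * dv / denom          # forward hazard f/S
    lam_rev = dv / (denom * v)                   # reverse hazard f/F
    return math.log(lam_fwd) - math.log(lam_rev), 3 * math.log(t) + zb

lhs, rhs = check(1.7, 0.8)
assert abs(lhs - rhs) < 1e-12
```

The difference of the two log hazards reduces to \(\log \{v(t)\exp (Z^\top \beta )\}\), which is exactly \(\alpha (t)+Z^\top \beta \).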

Consider the counting process

$$\begin{aligned} N_i(t)=I(t\le T_i\le R_i), Y_i(t)= I(T_i\le t\le R_i), \end{aligned}$$

and denote

$$\begin{aligned} M_i(t,\beta )=N_i(t)-\int _t^{\tau }Y_i(s)\frac{1}{\{\exp (Z_i^\top \beta )v(s)+1\}v(s)}dv(s). \end{aligned}$$

Then \(M_i(t,\beta )\) is a martingale with respect to the self-exciting (canonical) filtration (Keiding and Gill 1990; Stralkowska-Kominiak and Stute 2009) and

$$\begin{aligned} M_i(dt,\beta )=dN_i(t)+Y_i(t)\frac{1}{\{\exp (Z_i^\top \beta )v(t)+1\}v(t)}dv(t). \end{aligned}$$
(2)

Multiplying both sides of (2) by \(\{\exp (Z_i^\top \beta )v(t)+1\}\) and summing over the n observations gives

$$\begin{aligned}{} & {} \sum _{i=1}^n\{\exp (Z_i^\top \beta )v(t)+1\}dN_i(t)+\sum _{i=1}^n Y_i(t)\frac{dv(t)}{v(t)}\nonumber \\{} & {} \qquad =\sum _{i=1}^n\{\exp (Z_i^\top \beta )v(t)+1\}M_i(dt,\beta ). \end{aligned}$$
(3)

Dividing both sides by \(\sum _{i=1}^nY_i(t)\), we obtain:

$$\begin{aligned} \frac{\sum _{i=1}^n\{\exp (Z_i^\top \beta )v(t)+1\}d N_i(t)}{\sum _{i=1}^n Y_i(t)}+\frac{d v(t)}{v(t)}=\frac{\sum _{i=1}^n\{\exp (Z_i^\top \beta )v(t)+1\}M_i(d t,\beta )}{\sum _{i=1}^n Y_i(t)}. \end{aligned}$$

which is equivalent to:

$$\begin{aligned}{} & {} v(t)\frac{\sum _{i=1}^n\exp (Z_i^\top \beta )d N_i(t)}{\sum _{i=1}^n Y_i(t)}+\frac{\sum _{i=1}^n d N_i(t)}{\sum _{i=1}^n Y_i(t)}+\frac{d v(t)}{v(t)}\nonumber \\{} & {} \qquad = \sum _{i=1}^n\frac{\exp (Z_i^\top \beta )v(t)+1}{\sum _{i=1}^{n}Y_i(t)}M_i(dt,\beta ). \end{aligned}$$
(4)

Denote the left-hand side of (4) as:

$$\begin{aligned} U(\beta ,dt)=\frac{dv(t)}{v(t)}+p_n(t)dt-q_n(t,\beta )v(t)dt, \end{aligned}$$

where

$$\begin{aligned} p_n(t)dt=\frac{\sum _{i=1}^ndN_i(t)}{\sum _{j=1}^nY_j(t)}, q_n(t,\beta )dt=-\frac{\sum _{i=1}^n\exp (Z_i^\top \beta )dN_i(t)}{\sum _{j=1}^nY_j(t)}. \end{aligned}$$

From standard counting process arguments (Andersen and Gill 1982; Aalen), we know that the stochastic integral with respect to the counting process martingale \(M_i(dt,\beta )\) is also a martingale; this motivates the equation

$$\begin{aligned} E\left[ \frac{1}{n}U(\beta ,dt)\right] = E\left[ \frac{1}{n}\sum _{i=1}^n\frac{\exp (Z_i^\top \beta )v(t)+1}{\sum _{i=1}^{n}Y_i(t)}M_i(dt,\beta ) \right] . \end{aligned}$$

We construct the following estimating equation

$$\begin{aligned} \frac{1}{n}U(\beta ,dt)=0. \end{aligned}$$
(5)

Only v(t) is unknown in (5); let the estimate of v(t) be \({\hat{v}}_n(t,\beta )\). Denote

$$\begin{aligned} P_n(t)=\exp \left\{ \int _t^{\tau }\frac{\sum _{i=1}^ndN_i(s)}{\sum _{j=1}^nY_j(s)} \right\} , Q_n(t,\beta )=\int _t^{\tau }\frac{\sum _{i=1}^n\exp (Z_i^\top \beta )dN_i(s)}{\sum _{j=1}^nY_j(s)}, \end{aligned}$$

then

$$\begin{aligned} {\hat{v}}_n(t,\beta ) =\frac{P_n(t)}{\int _t^{\tau }P_n(s)Q_n(ds,\beta )}. \end{aligned}$$
(6)
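To make (6) concrete, here is one possible discrete-time implementation. It treats each \(dN_i\) as a point mass at \(T_i\) and replaces the integrals by sums over observed event times; the function name `v_hat` and this discretization are our own illustration, not the authors' code:

```python
import numpy as np

def v_hat(t, beta, T, R, Z):
    """Discretized version of the plug-in estimate (6):
    v_hat(t) = P_n(t) / int_t^tau P_n(s) Q_n(ds, beta),
    with each dN_i a point mass at T_i and integrals replaced by sums."""
    def P_n(s0, ev):
        # P_n(s0) = exp{ sum over event times u >= s0 of #(T == u) / #at-risk(u) }
        return np.exp(sum(np.sum(T == u) / np.sum((T <= u) & (u <= R))
                          for u in ev[ev >= s0]))

    ev = np.sort(np.unique(T[T >= t]))               # event times in [t, tau]
    denom = 0.0
    for s in ev:
        atrisk = np.sum((T <= s) & (s <= R))         # sum_j Y_j(s)
        dQ = np.sum(np.exp(Z[T == s] @ beta)) / atrisk   # mass of Q_n(ds, beta)
        denom += P_n(s, ev) * dQ
    return P_n(t, ev) / denom
```

The sketch assumes at least one event time in \([t,\tau ]\); production code would need to guard the empty-risk-set and empty-event cases.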

Multiplying (2) by \(Z_i\{\exp (Z_i^\top \beta )v(t)+1\}/n\) and summing over the n observations, we obtain

$$\begin{aligned}{} & {} \frac{1}{n}\sum _{i=1}^nZ_i\left[ \{\exp (Z_i^\top \beta )v(t)+1\}dN_i(t)+ Y_i(t)\frac{dv(t)}{v(t)}\right] \nonumber \\{} & {} \qquad =\frac{1}{n}\sum _{i=1}^nZ_i\{\exp (Z_i^\top \beta )v(t)+1\}M_i(dt,\beta ). \end{aligned}$$
(7)

Following the same idea as in (5) and integrating both sides of (7), we can construct another equation:

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\int _0^{\tau }Z_i\left[ \{\exp (Z_i^\top \beta )v(t)+1\}dN_i(t)+ Y_i(t)\frac{dv(t)}{v(t)}\right] =0 \end{aligned}$$
(8)

Substituting (6) into (8), we can obtain the estimate of \(\beta \) by solving the following equation:

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\int _0^{\tau }Z_i\left[ \left\{ \exp (Z_i^\top \beta ){\hat{v}}_n(t,\beta )+1 \right\} dN_i(t) +Y_i(t)\frac{{\hat{v}}_n(dt,\beta )}{{\hat{v}}_n(t,\beta )} \right] =0. \end{aligned}$$

Moreover, since

$$\begin{aligned} \frac{{\hat{v}}_n(dt,\beta )}{{\hat{v}}_n(t,\beta )}=-\frac{\sum _{k=1}^ndN_k(t)}{\sum _{l=1}^nY_l(t)} -\frac{\sum _{k=1}^n\exp (Z_k^\top \beta )dN_k(t)}{\sum _{l=1}^nY_l(t)}{\hat{v}}_n(t,\beta ), \end{aligned}$$

then

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n\int _0^{\tau }\left\{ Z_i-{\bar{Z}}(t)\right\} \left\{ \exp (Z_i^\top \beta ){\hat{v}}_n(t,\beta )+1\right\} dN_i(t)=0, \end{aligned}$$

where

$$\begin{aligned} {\bar{Z}}(t)=\frac{\sum _{i=1}^nZ_iY_i(t)}{\sum _{j=1}^nY_j(t)}. \end{aligned}$$

Finally, let

$$\begin{aligned} S_n(\beta )=\frac{1}{n}\sum _{i=1}^n\int _0^{\tau }\left\{ Z_i-{\bar{Z}}(t)\right\} \left\{ \exp (Z_i^\top \beta ){\hat{v}}_n(t,\beta )+1\right\} dN_i(t), \end{aligned}$$
(9)

and denote the solution of \(S_n(\beta )=0\) by \({\hat{\beta }}_n\); we have the following theorem:

Theorem 1

Under assumptions A1-A4 in the Appendix, \(\sqrt{n}({\hat{\beta }}_n-\beta _0)\) converges weakly to a mean-zero normal distribution, with covariance matrix \(U^{-1}V(U^{-1})^\top \), where V is the covariance matrix of \(\sqrt{n}S_n(\beta _0)\), \(U=\lim _{n\rightarrow \infty }\{\partial S_n(\beta )/\partial \beta \}\mid _{\beta =\beta _0}\). The kth row of U is:

$$\begin{aligned}{} & {} \mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau }\left\{ Z_i-{\bar{Z}}(t)\right\} \\{} & {} \quad \times \left\{ Z_{ik}\exp (Z_i^\top \beta _0){\hat{v}}_n(t,\beta _0)+\exp (Z_i^\top \beta _0) \frac{\partial {\hat{v}}_n(t,\beta )}{\partial \beta _k}\mid _{\beta =\beta _0} \right\} dN_i(t). \end{aligned}$$

Remark

For the proportional odds model with the usual logit link:

$$\begin{aligned} \log \left\{ \frac{S(t\mid Z)}{1-S(t\mid Z)}\right\} =\alpha (t)+Z^\top \beta . \end{aligned}$$
(10)

Define

$$\begin{aligned} {\tilde{M}}_i(t,\beta )=N_i(t)-\int _t^{\tau }Y_i(s) \frac{\exp (Z_i^\top \beta )}{1+\exp (Z_i^\top \beta )v(s)} dv(s), \end{aligned}$$

we claim that \({\tilde{M}}_i(t,\beta )\) is a martingale. Recall that \(v(t)=\exp (\alpha (t))\); following (10), we have

$$\begin{aligned} S(t|Z)=\frac{\exp (\alpha (t)+Z^\top \beta )}{1+\exp (Z^\top \beta )v(t)}, \end{aligned}$$

as a result, we can obtain

$$\begin{aligned} f(t|Z)=\frac{\exp (Z^\top \beta )v^{\prime }(t)}{(1+\exp (Z^\top \beta )v(t))^2}, F(t|Z)=\frac{1}{1+\exp (Z^\top \beta )v(t)}. \end{aligned}$$

Following the definition of reverse hazard in Sect.  2, we can write the reverse hazard as

$$\begin{aligned} {\tilde{\lambda }}^B(t|Z)=\frac{f(t|Z)}{F(t|Z)}=\frac{\exp (Z^\top \beta )v^{\prime }(t)}{1+\exp (Z^\top \beta )v(t)}. \end{aligned}$$

From the general definition of a martingale in Fleming and Harrington (1991, p. 25), we can easily show that \({\tilde{M}}_i(t,\beta )\) is a martingale. For model (1), in contrast,

$$\begin{aligned} \lambda ^B(t|Z)=\frac{v^{\prime }(t)}{1+\exp (Z^\top \beta )v(t)}, \end{aligned}$$

and \(N_i(t)-\int _{t}^{\tau }Y_i(s)\lambda ^B(s|Z_i)ds\) is the corresponding martingale.

The corresponding estimating equation under model (10) has the following form

$$\begin{aligned} S_n^{(1)}(\beta )=\sum _{i=1}^n\int _0^{\tau }\left\{ Z_i-{\bar{Z}}(t,\beta )\right\} \left\{ \exp (Z_i^\top \beta ){\hat{v}}_n(t,\beta )+1 \right\} dN_i(t), \end{aligned}$$
(11)

where

$$\begin{aligned} {\bar{Z}}(t,\beta )=\frac{\sum _{i=1}^nZ_iY_i(t)\exp (Z_i^\top \beta )}{\sum _{j=1}^nY_j(t)\exp (Z_j^\top \beta )}. \end{aligned}$$

Equation (11) can also be used to estimate \(\beta \); however, compared with (9), (11) is more complicated and computationally intensive, while the derivative of (9) with respect to \(\beta \) can be obtained easily. As a result, (9) can be solved readily by the Newton–Raphson algorithm. In the following simulations, we use estimating equation (9).
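To make the computation concrete, here is a minimal sketch of solving (9) for a one-dimensional covariate. `S_n` evaluates the estimating function with each \(dN_i\) a point mass at \(T_i\); `vhat` is any supplied estimate of \(v(t)\) (e.g. the plug-in (6), or the true baseline for testing). A simple bisection stands in for the Newton–Raphson step mentioned in the text; all names here are our own illustration:

```python
import numpy as np

def S_n(beta, T, R, Z, vhat):
    """Estimating function (9) for a scalar covariate Z, with each dN_i a
    point mass at T_i and vhat(t, beta) a supplied estimate of v(t)."""
    total = 0.0
    for i in range(len(T)):
        atrisk = (T <= T[i]) & (T[i] <= R)    # Y_j(T_i)
        zbar = Z[atrisk].mean()               # \bar{Z}(T_i)
        total += (Z[i] - zbar) * (np.exp(Z[i] * beta) * vhat(T[i], beta) + 1.0)
    return total / len(T)

def solve(T, R, Z, vhat, lo=-5.0, hi=5.0, tol=1e-10):
    """Bisection on S_n; returns None when no sign change is bracketed."""
    f_lo, f_hi = S_n(lo, T, R, Z, vhat), S_n(hi, T, R, Z, vhat)
    if f_lo * f_hi > 0:
        return None
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S_n(mid, T, R, Z, vhat) * f_lo <= 0:
            hi = mid
        else:
            lo, f_lo = mid, S_n(mid, T, R, Z, vhat)
    return 0.5 * (lo + hi)
```

Note that when the covariate is constant, \(Z_i-\bar{Z}(T_i)=0\) for every i, so \(S_n(\beta )\equiv 0\), mirroring the fact that \(\beta \) is then unidentifiable.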

In addition to the unweighted objective function (9), a weighted objective function can also be used to obtain a class of weighted estimators of \(\beta _0\). This procedure is often used to reduce the sandwich variance estimate as well as to improve efficiency. The weighted version of the objective function is

$$\begin{aligned}{} & {} S_{n,W}(\beta )\nonumber \\{} & {} \quad =\frac{1}{n}\sum _{i=1}^n\int _0^{\tau }W_n(t)\left\{ Z_i-{\bar{Z}}(t)\right\} \left\{ \exp (Z_i^\top \beta ){\hat{v}}_n(t,\beta )+1\right\} dN_i(t)=0, \end{aligned}$$
(12)

here \(W_n(t)\) is a predictable weight function with respect to the canonical filtration which converges to a non-random function w(t). A commonly used weight function is the Prentice–Wilcoxon type function \(W_{n1}(t)={\hat{S}}_{LB}(t)\), where \({\hat{S}}_{LB}(\cdot )\) is the Lynden-Bell estimate of the baseline survival function for right truncated failure time data. Denote the corresponding estimate of \(\beta \) by \({\hat{\beta }}_{n,w}\). Then we have the following theorem:

Theorem 2

Under the same assumptions as Theorem 1, when \(n\rightarrow \infty \), for a prespecified weight function \(W_n(\cdot )\rightarrow w(\cdot )\), \(\sqrt{n}({\hat{\beta }}_{n,w}-\beta _0)\) converges weakly to a mean-zero normal distribution, with covariance matrix \(U_w^{-1}V_w(U_w^{-1})^\top \), where \(V_w\) is the covariance matrix of \(\sqrt{n}S_{n,w}(\beta _0)\), \(U_w=\lim _{n\rightarrow \infty }\{\partial S_{n,w}(\beta )/\partial \beta \}\mid _{\beta =\beta _0}\). The kth row of \(U_w\) is:

$$\begin{aligned}{} & {} \mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)\left\{ Z_i-{\bar{Z}}(t)\right\} \left[ Z_{ik}\exp (Z_i^\top \beta _0){\hat{v}}_n(t,\beta _0)\right. \\{} & {} \quad \left. +\exp (Z_i^\top \beta _0) \left\{ \partial {\hat{v}}_n(t,\beta )/\partial \beta _k\right\} \mid _{\beta =\beta _0} \right] dN_i(t). \end{aligned}$$
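The Prentice–Wilcoxon weight \(W_{n1}(t)={\hat{S}}_{LB}(t)\) requires the Lynden-Bell estimate. One standard form, as a reverse-time product-limit estimator, can be sketched as follows (this discretization is our own illustration; \(\hat{F}\) estimates the distribution function up to the scale \(F(\tau )\)):

```python
import numpy as np

def S_LB(t, T, R):
    """Reverse-time product-limit (Lynden-Bell type) estimate for right
    truncated data: Fhat(t) = prod over event times s > t of {1 - d(s)/C(s)},
    with d(s) = #{T_i = s} and C(s) = #{i : T_i <= s <= R_i}.
    Returns 1 - Fhat(t) as a baseline survival estimate."""
    Fhat = 1.0
    for s in np.unique(T[T > t]):
        d = np.sum(T == s)
        C = np.sum((T <= s) & (s <= R))   # risk set in reverse time
        Fhat *= 1.0 - d / C
    return 1.0 - Fhat
```

Since any i with \(T_i=s\) satisfies \(T_i\le s\le R_i\), we always have \(C(s)\ge d(s)\), so the factors stay in [0, 1] and the estimate is nonincreasing in t.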

Recently, many authors have considered the problem of finding the optimal weight in a weighted estimating equation, including Chen and Cheng (2005), Chen and Wang (2000), and Chen et al. (2012), among others. To achieve this goal, we only need to find the w(t) such that \(U_w(\beta _0)^{-1}V_w(\beta _0)U_w(\beta _0)^{-1}\) is minimized. Since both the empirical weight function \(W_n(t)\) and its limit w(t) do not rely on the unknown parameter \(\beta _0\), it is reasonable to set \(\beta _0=0\). Another justification for setting \(\beta _0=0\) is that it represents the baseline distribution. Therefore, letting \(\beta _0=0\), we have:

$$\begin{aligned} U_w(\beta _0){} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)\left\{ Z_i-{\bar{Z}}(t)\right\} Z_{i}\exp (Z_i^\top \beta _0)v(t)dN_i(t)\nonumber \\{} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)\left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2} \exp (Z_i^\top \beta _0)v(t)dN_i(t)\nonumber \\{} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)\left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}Y_i(t) \frac{\exp (Z_i^\top \beta _0)v^{\prime }(t)}{\exp (Z_i^\top \beta _0)v(t)+1} dt\nonumber \\{} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)\left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}Y_i(t) \frac{v^{\prime }(t)}{v(t)+1} dt. \end{aligned}$$
(13)
$$\begin{aligned} V_w(\beta _0){} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)^2\left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}\nonumber \\{} & {} \quad \times \left\{ \exp (Z_i^\top \beta _0)v(t)+1\right\} ^2Y_i(t)\frac{1}{\left\{ \exp (Z_i^\top \beta _0)v(t)+1\right\} v(t)}v^{\prime }(t)dt\nonumber \\{} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)^2\left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}Y_i(t)\left\{ \exp (Z_i^\top \beta _0)v(t)+1\right\} \frac{v^{\prime }(t)}{v(t)}dt\nonumber \\{} & {} =\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } W_n(t)^2\left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}Y_i(t)\left\{ v(t)+1\right\} \frac{v^{\prime }(t)}{v(t)}dt \end{aligned}$$
(14)

Applying the Cauchy–Schwarz inequality to \(U_w(\beta _0)^{-1}V_w(\beta _0)U_w(\beta _0)^{-1}\) with \(\beta _0=0\), it follows that the optimal weight is proportional to

$$\begin{aligned} w(t)=\frac{v(t)}{(v(t)+1)^2}=S(t)\left\{ 1-S(t)\right\} , \end{aligned}$$
(15)

which minimizes the variance of \({\hat{\beta }}_n\). Indeed, when (15) holds, we have

$$\begin{aligned}{} & {} U_w(\beta _0)=\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } \left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}Y_i(t) \frac{v(t)v^{\prime }(t)}{(v(t)+1)^3} dt,\\{} & {} V_w(\beta _0)=\mathop {\lim }\limits _{n\rightarrow \infty }\frac{1}{n}\sum _{i=1}^n\int _0^{\tau } \left\{ Z_i-{\bar{Z}}(t)\right\} ^{\otimes 2}Y_i(t)\frac{v(t)v^{\prime }(t)}{(v(t)+1)^3}dt, \end{aligned}$$

which means that when \(\beta _0=0\) and \(w(t)=S(t)\{1-S(t)\}\), the sandwich matrix \(U_w(\beta _0)^{-1}V_w(\beta _0)U_w(\beta _0)^{-1}\) achieves its minimum value \(U_w(\beta _0)^{-1}\) (or equivalently \(V_w(\beta _0)^{-1}\)).
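The closed form in (15) follows from \(S(t)=1/\{1+v(t)\}\) at \(\beta _0=0\) under model (1), so that \(S(t)\{1-S(t)\}=v(t)/\{1+v(t)\}^2\). A quick numerical check (with the illustrative choice \(v(t)=t^3\)):

```python
def weight_pair(t):
    """Return the two expressions for the optimal weight (15):
    v/(v+1)^2 and S(1-S) with S = 1/(1+v), here with v(t) = t^3."""
    v = t ** 3                    # illustrative baseline v(t) = exp(3 log t)
    S = 1.0 / (1.0 + v)           # baseline survival at beta_0 = 0 under model (1)
    return v / (v + 1.0) ** 2, S * (1.0 - S)

for t in (0.3, 1.0, 2.5):
    lhs, rhs = weight_pair(t)
    assert abs(lhs - rhs) < 1e-12
```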

In the simulations, we let \(W_{n2}(t) ={\hat{S}}_{LB}(t)\left\{ 1-{\hat{S}}_{LB}(t)\right\} \); the results are shown in Table 1, where it can be seen that the weight \(W_{n2}(t)\) achieves the minimal variance among the three estimators.

3 Simulation and real data

We perform simulation studies to evaluate the finite sample properties of the proposed estimator. In the simulations, let \(\alpha (t)=3\log t\) and \(\beta _0=(1,0.5)^\top \); \(Z_1\) is a continuous covariate following a uniform distribution on (0, 2), and \(Z_2\) is a binary covariate following a Bernoulli distribution with probability 0.5. The failure time variable is generated from model (1). The right truncation variable follows a uniform distribution on (0, 4), which makes the truncation rate equal to 20%. For each setting, 1000 datasets are generated, each with n observations, \(n=300, 400, 500, 600\), respectively. \(W_{n1}(t)\) and \(W_{n2}(t)\) are chosen as the weight functions in the weighted estimating equations. As shown in Table 1, all three estimating equations yield unbiased estimates and the empirical coverage probabilities are around the nominal level of 95%. When a weight function is incorporated into the estimating equation, the efficiency is greatly improved, and the variance is minimized under \(W_{n2}(t)\) among the three estimators.
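The data-generating step of this design can be sketched by inverse transform plus acceptance sampling (the function `simulate` below is our own illustration of the setup, not the authors' code): under model (1) with \(\alpha (t)=3\log t\), \(F(t\mid Z)=t^3e^{Z^\top \beta }/(1+t^3e^{Z^\top \beta })\), so \(T=\{U/(1-U)\}^{1/3}e^{-Z^\top \beta /3}\) for \(U\sim \text {Unif}(0,1)\), and a draw is kept only when \(T\le R\):

```python
import numpy as np

rng = np.random.default_rng(2024)

def simulate(n, beta=(1.0, 0.5), trunc_upper=4.0):
    """Draw n right-truncated observations from model (1) with alpha(t) = 3 log t."""
    T, R, Z = [], [], []
    b = np.asarray(beta)
    while len(T) < n:
        z = np.array([rng.uniform(0.0, 2.0), rng.binomial(1, 0.5)])
        u = rng.uniform()
        t = (u / (1.0 - u) * np.exp(-z @ b)) ** (1.0 / 3.0)  # inverse transform
        r = rng.uniform(0.0, trunc_upper)                     # truncation time
        if t <= r:                                            # kept only if observed
            T.append(t); R.append(r); Z.append(z)
    return np.array(T), np.array(R), np.array(Z)
```

The accepted pairs follow the conditional law given \(T\le R\), which is exactly the biased sampling scheme the estimating equations are designed for.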

As pointed out by one of the referees and the associate editor, Shen et al. (2017) also studied right truncated data under linear transformation models, and when the error term in the linear transformation model follows the logistic distribution (Fine et al. 1998), the model becomes the proportional odds model. Let

$$\begin{aligned}{} & {} N_i^{\dag }(t)=I(\tau -T_i\le t)=I(T_i\ge \tau -t),\\{} & {} Y_i^{\dag }(t)=I(\tau -R_i\le t\le \tau -T_i)=I(T_i\le \tau -t\le R_i), \end{aligned}$$

then the estimating equations (3) and (4) in Shen et al. (2017) can be written as

$$\begin{aligned}{} & {} U(\beta ,\alpha (\tau -t))\\{} & {} \quad =\sum _{i=1}^{n}\int _{-\infty }^{\tau }Z_i\left[ dN_i(t)-Y_i(t)d\left( \log \frac{\exp (Z_i^\top \beta +\alpha (\tau -t))}{1+\exp (Z_i^\top \beta +\alpha (\tau -t))} \right) \right] =0,\\{} & {} \qquad \sum _{i=1}^n\left[ dN_i(t)-Y_i(t)d\left( \log \frac{\exp (Z_i^\top \beta +\alpha (\tau -t))}{1+\exp (Z_i^\top \beta +\alpha (\tau -t))} \right) \right] =0. \end{aligned}$$

We recognize that Shen et al. (2017)’s methodology is general and works for all linear transformation models, including the proportional odds model. However, our approach is more convenient under the proportional odds model, since it has a simpler form, and the estimation of the intercept \(\alpha (t)\) can be done beforehand and plugged into the final estimating equation, while Shen et al. (2017)’s method cannot achieve this and its estimation procedure involves a complicated iteration that increases the risk of non-convergence. Besides, Shen et al. (2017) only deal with reverse time, not the reverse hazard function, whereas we utilize the relationship between the reverse hazard function and the forward-time hazard function to produce a more intuitive estimator.

We conduct simulations for Shen et al. (2017)’s method and report the results in Table 1. The code was obtained from the authors via personal communication. However, one of the authors, Prof. Pao-Sheng Shen, mentioned that they were unable to calculate the asymptotic variance and coverage probabilities, that the existing results in their paper contain some errors, and that their current code only produces the bias and standard error. As a result, we only report the bias and standard error of Shen et al. (2017)’s method. All the simulations were conducted under the same model as ours. We also found the computation speed of Shen et al. (2017)’s method to be very slow: even though the asymptotic variance and coverage probability were not calculated, their method is still more than 3 times slower than ours under the same model setting and sample size. The SSE of Shen et al. (2017)’s method is smaller than that of our unweighted estimator, but larger than those of the two weighted estimators. For the second approach in their paper, i.e., the conditional maximum-likelihood approach, since the bias is large, we did not perform further comparisons here. We would like to mention that the large bias of the conditional maximum-likelihood approach is also confirmed in Vakulenko-Lagun et al. (2020).

As suggested by one of the reviewers, we also perform simulations without accounting for the truncation; the results are shown in Table 2. We choose the truncation distribution as a uniform distribution from 0 to 4, 2, and 1, respectively, which corresponds to a 20% truncation rate (mild truncation), a 40% truncation rate (moderate truncation), and a 70% truncation rate (heavy truncation). As we can see from Table 2, all the estimators are biased, and a larger truncation rate leads to a bigger bias and variance, though for the same truncation rate, the variances decrease as the sample size increases. These results also coincide with Table 2.1 (p. 20) in Rennert (2018) and Table 1 in Rennert and Xie (2018), though those two articles deal with doubly truncated data under the Cox model.

Table 1 Simulation results
Table 2 Simulation results when ignoring truncation

To better illustrate how to employ the proposed method in a real situation, we analyze the Centers for Disease Control’s blood-transfusion data, which was used by Kalbfleisch and Lawless (1989) and Wang (1989). The data include 494 cases reported to the Centers for Disease Control prior to January 1, 1987, and diagnosed before July 1, 1986. Only 295 of the 494 cases have consistent data, with infection attributable to a single blood transfusion or a short series of transfusions, and the analysis is restricted to this subset. We obtained the raw observation data via personal communication with Thomas Peterman, Centers for Disease Control and Prevention. The data contain three variables: T is the time from blood transfusion to the diagnosis of AIDS (in months), R is the time from blood transfusion to the end of the study (July 1986, in months), and Age is the age of the person at transfusion (in years). Comparing the data with Kalbfleisch and Lawless (1989)’s as well as Wang (1989)’s, the observation (X=16, T=33, Age=34) cannot be found in the raw data and is therefore deleted, leaving a final sample size of 294; a small number of entries are also corrected because they are inconsistent with the raw data.

We apply the proposed method to this data and treat Age as the covariate in the regression. In Wang (1989)’s paper, the data are categorized into three age groups because of different patterns of survivorship: ‘children’ aged 1-4, ‘adults’ aged 5-59, and ‘elderly patients’ aged 60 and older. The survivor behaviour of the ‘adults’ and ‘elderly patients’ groups is similar except for the right tail, while there is an evident distinction compared with ‘children’. In the current analysis, we delete the data from ‘children’ and focus on a combined sample of ‘adults’ and ‘elderly patients’ with a sample size of 260. The range of T is from 0 to 89, and the range of R is from 0 to 99. For all \(i\in \{1,\dots ,260\}\), we have \(T_i\le R_i\); as a result, our dataset does not have the identifiability issue mentioned in Seaman et al. (2022). We also applied Shen et al. (2017)’s method and the results are similar. All the results are shown in Table 3, where the weights are chosen as \(W_{n1}(t)\) and \(W_{n2}(t)\). The estimated parameters from the unweighted and weighted estimating equations do not show much difference, but the variance is reduced when weights are used. In both situations mentioned above, Age has a very weak positive effect on the odds ratio, but the effect is not significant.

Table 3 Age effect for blood transfusion data

4 Discussion

Directly analyzing right truncated data in the usual time order can fail because the ‘at risk’ process is not adapted to the history of the process (Gross 1992). The retro hazard solves this problem by transforming right truncated data into left truncated data in reverse time (Woodroofe 1985). Statistical modelling becomes even more flexible by incorporating the natural structure of the proportional odds model. The usual form of the proportional odds model can also be utilized, but the theoretical and computational burden of the estimator would increase; employing (1) substantially improves the situation.