1 Introduction

The Weibull distribution is often used in survival analysis as well as in reliability theory; see e.g., Kalbfleisch and Prentice (2011). This flexible distribution is a popular model that allows for constant, increasing and decreasing hazard rates. The Weibull distribution is also frequently applied in various engineering fields, including electrical and industrial engineering, to represent, for example, manufacturing times; see Jiang and Murthy (2011). As a result of its wide range of practical uses, a number of goodness-of-fit tests have been developed for the Weibull distribution; see e.g., Mann et al. (1973), Tiku and Singh (1981), Liao and Shimokawa (1999), Cabaña and Quiroz (2005) as well as Krit (2014).

The papers listed above deal with testing for the Weibull distribution in the full sample case; i.e., where all lifetimes are observed. However, random right censoring often occurs in the fields mentioned above. For example, we may study the duration that antibodies remain detectable in a patient’s blood after receiving a specific type of Covid-19 vaccine, i.e. the duration of the protection that the vaccine affords the recipient. When gathering the relevant data, we will likely not be able to measure this duration in all of the patients. For example, some may leave the study by emigrating to a different country while still having detectable antibodies. In this case, the exact time of interest is not observed. This situation is referred to as random right censoring, see e.g., Cox and Oakes (1984).

In the presence of censoring, testing the hypothesis that the distribution of the lifetimes is Weibull is complicated by the fact that an incomplete sample is observed. Balakrishnan et al. (2015) suggest a way to perform the required goodness-of-fit tests by transforming the censored sample to a complete sample. Another approach is to modify the test statistics used in the full sample case to account for the presence of censoring. Although fewer in number, tests for the Weibull distribution in the presence of random censoring are available in the literature. For example, Koziol and Green (1976) and Kim (2017) propose modified versions of the Cramér-von Mises test and the test proposed in Liao and Shimokawa (1999), respectively, for use with censored data.

Throughout this paper we are primarily interested in the situation where censoring is present; the results relating to the full sample case are treated as special cases obtained when all lifetimes are observed. Before proceeding some notation is introduced. Let \(X_1, \dots , X_n\) be independent and identically distributed (i.i.d.) lifetime variables with continuous distribution function F and let \(C_1, \dots , C_n\) be i.i.d. censoring variables with distribution function H, independent of \(X_1, \dots , X_n\). We assume non-informative censoring throughout. Let

$$\begin{aligned} T_j=\text{ min }(X_j,C_j) \ \ \ \ \ \ \text{ and } \ \ \ \ \ \ \delta _j= {\left\{ \begin{array}{ll} 1, &{} \text {if}\ X_j\le C_j, \\ 0, &{} \text {if}\ X_j > C_j. \end{array}\right. } \end{aligned}$$

Note that in the full sample case \(T_j=X_j\) and \(\delta _j=1\) for \(j=1, \dots , n\).

Based on the observed pairs \((T_j, \delta _j), \ j=1, \dots, n\), we wish to test the composite hypothesis

$$\begin{aligned} H_0: X \sim Weibull(\lambda ,\theta ), \end{aligned}$$
(1)

for some unknown \(\lambda >0\) and \(\theta >0\). Here \(X\sim Weibull(\lambda ,\theta )\) refers to a Weibull distributed random variable with distribution function

$$\begin{aligned} F(x) = 1-e^{-\left( x/\lambda \right) ^\theta }, \ \ x>0. \end{aligned}$$

This hypothesis is to be tested against general alternatives. We will make use of maximum likelihood estimation to estimate \(\lambda \) and \(\theta \). The log-likelihood of the Weibull distribution is

$$\begin{aligned} \mathcal {L}(\theta ,\lambda |(T_1,\delta _1),\dots ,(T_n,\delta _n)) = d\log (\theta )-d\theta \log (\lambda )+(\theta -1)\sum _{j=1}^n\delta _j\log (T_j)-\lambda ^{-\theta }\sum ^n_{j=1} T_j^{\theta }, \end{aligned}$$

where \(d=\sum ^n_{j=1} \delta _j\). In the full sample case \(d=n\). No closed form formulae for the maximum likelihood estimates \(\hat{\lambda }\) and \(\hat{\theta }\) exist, meaning that numerical optimisation techniques are required to arrive at parameter estimates.
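Although no closed-form solution exists, the profile likelihood reduces the problem to a one-dimensional root search: setting \(\partial \mathcal {L}/\partial \lambda =0\) gives \(\hat{\lambda }=(\sum _j T_j^\theta /d)^{1/\theta }\), and substituting this back yields a score equation in \(\theta \) alone. A minimal sketch of this approach (in Python rather than the R used for the paper's computations; the function name and bisection brackets are our own choices):

```python
import numpy as np

def weibull_mle(t, delta, lo=1e-3, hi=50.0, iters=200):
    """Censored Weibull MLE via the profile likelihood: bisection on the score
    1/theta + (1/d) sum(delta_j log t_j) - sum(t_j^theta log t_j)/sum(t_j^theta),
    which is decreasing in theta; lambda then follows in closed form."""
    t, delta = np.asarray(t, float), np.asarray(delta, float)
    d, logt = delta.sum(), np.log(t)

    def score(th):
        w = t ** th
        return 1.0 / th + (delta @ logt) / d - (w @ logt) / w.sum()

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    lam = (np.sum(t ** theta) / d) ** (1.0 / theta)
    return theta, lam

# illustration on simulated data: Weibull(lambda=3, theta=2) lifetimes
# subject to exponential censoring
rng = np.random.default_rng(1)
x = 3.0 * rng.weibull(2.0, 5000)
c = rng.exponential(8.0, 5000)
t, delta = np.minimum(x, c), (x <= c).astype(float)
theta_hat, lam_hat = weibull_mle(t, delta)
```

The bracket \([10^{-3}, 50]\) is an arbitrary choice that covers most shape values encountered in practice.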

Since the Weibull distribution has a shape parameter, the first step of many goodness-of-fit tests for the Weibull distribution is to transform the data. If \(X\sim Weibull(\lambda ,\theta )\), then a frequently used transformation is \(\log (X)\), which results in a random variable that is type I extreme value distributed with parameters \(\log (\lambda )\) and \(1/\theta \). The transformed random variable belongs to a location-scale family, which is a desirable property when performing goodness-of-fit testing. We therefore have that, if \(X \sim Weibull(\lambda ,\theta )\), then \(X^{(t)}=\theta (\log (X)-\log (\lambda ))\) follows a standard type I extreme value distribution with distribution function

$$\begin{aligned} G(x) = 1-\text {e}^{-\text {e}^{x}}, \ \ -\infty<x<\infty . \end{aligned}$$

We denote a random variable with this distribution function by EV(0, 1). As a result, the hypothesis in (1) holds if, and only if, \(X^{(t)}\sim EV(0,1)\). All of the test statistics considered make use of the transformed observed values

$$\begin{aligned} Y_j = \hat{\theta }\left[ \log (T_j)-\log (\hat{\lambda })\right] , \end{aligned}$$
(2)

with \(\hat{\lambda }\) and \(\hat{\theta }\) the maximum likelihood estimates of the Weibull distribution. Let

$$\begin{aligned} X_j^{(t)} = \hat{\theta }\left[ \log (X_j)-\log (\hat{\lambda })\right] . \end{aligned}$$

If \(X_1,\dots ,X_n\) are realised from a Weibull\((\lambda ,\theta )\) distribution, then \(X_1^{(t)},\dots ,X_n^{(t)}\) will approximately follow an EV(0, 1) distribution; see e.g., Kotz and Nadarajah (2000). The resulting random variables are no longer independent. However, several properties of the classical testing procedures remain unaffected when performing this type of transformation; the interested reader is referred to Baringhaus and Henze (1991) as well as Gupta and Richards (1997) and the references therein for more details. The tests employed below are based on discrepancy measures between the calculated values of \(Y_1,\dots ,Y_n\) and the standard type I extreme value distribution. The order statistics of \(Y_1, \dots ,Y_n\) are denoted by \(Y_{(1)}< \cdots < Y_{(n)}\), while \(\delta _{(j)}\) represents the indicator variable corresponding to \(Y_{(j)}\).
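As a quick illustration of the transformation (with arbitrarily chosen parameters, and using the true parameter values rather than estimates, so that the transformed values are exactly EV(0, 1) distributed):

```python
import numpy as np

# Weibull(lambda=2, theta=3) sample, transformed with the true parameters
rng = np.random.default_rng(7)
lam, theta = 2.0, 3.0
x = lam * rng.weibull(theta, 100_000)
xt = theta * (np.log(x) - np.log(lam))

# the empirical CDF of the transformed sample should match G(x) = 1 - exp(-exp(x))
grid = np.array([-2.0, -1.0, 0.0, 1.0])
emp = (xt[:, None] <= grid).mean(axis=0)
G = 1.0 - np.exp(-np.exp(grid))
max_err = np.abs(emp - G).max()
```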

The remainder of the paper is structured as follows. In Sect. 2, we propose two new classes of tests for the Weibull distribution for both the full sample and censored cases. We also modify the test proposed in Krit (2014) to accommodate random right censoring. In the presence of censoring, the null distribution of each of the test statistics considered depends on the unknown censoring distribution; we therefore propose a parametric bootstrap procedure in Sect. 2 in order to compute critical values. Section 3 presents the results of a Monte Carlo study in which the empirical powers of the newly proposed classes of tests as well as the newly modified test are compared to those of existing tests. The paper concludes in Sect. 4 with two practical applications: one concerning the survival times of patients diagnosed with a certain type of leukemia (no censoring is present in these data) and the other relating to observed leukemia remission times (in the presence of censoring). Some avenues for future research are also discussed.

2 Proposed test statistics

Our newly proposed classes of tests are based on the following theorem, which characterises the standard type I extreme value distribution.

Theorem 1

Let W be a random variable with an absolutely continuous density f and assume that \(E\left[ \text {e}^W\right] <\infty \). Then \(W\sim EV(0,1)\) if, and only if,

$$\begin{aligned} E\left[ \left( it+1-\text {e}^W\right) \text {e}^{itW}\right] =0, \ \forall \ t \in R, \end{aligned}$$

with \(i=\sqrt{-1}\).

Proof

The ’if’ part of the theorem can easily be shown using direct calculation. The ’only if’ part is shown below.

Assume \(E\left[ \left( it+1-\text {e}^W\right) \text {e}^{itW}\right] =0\). From Fourier analysis we have that the Fourier transform of \(f'\) satisfies \(\int _{-\infty }^{\infty }\text {e}^{itw}f'(w)\mathrm {d}w=-itE[\text {e}^{itW}]\). This implies that

$$\begin{aligned} 0&= E\left[ \left( it+1-\text {e}^W\right) \text {e}^{itW}\right] \\&= \int _{-\infty }^{\infty } \left( it+1-\text {e}^{w}\right) \text {e}^{itw}f(w) \mathrm {d}w\\&= \int _{-\infty }^{\infty } \left[ -f'(w)+ \left( 1-\text {e}^{w}\right) f(w)\right] \text {e}^{itw} \mathrm {d}w, \end{aligned}$$

which is the Fourier transform of \(h(w):=-f'(w)+\left( 1-\text {e}^{w}\right) f(w)\). Since the Fourier transform of h is identically 0, the uniqueness of the Fourier transform implies that \(h(w)=0\) for all \(w\in R\). Thus, f satisfies the differential equation \(f'(w)-\left( 1-\text {e}^{w}\right) f(w)=0\). Using separation of variables we have

$$\begin{aligned} \frac{f'(w)}{f(w)}=1-\text {e}^{w} \implies \log (f(w))=w-\text {e}^{w}+c \implies f(w)=\text {e}^{w-\text {e}^{w}}, \end{aligned}$$

where the last step follows from the fact that f must be a density function and hence integrate to 1. This completes the proof. \(\square \)
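The characterisation in Theorem 1 can also be checked by Monte Carlo: sampling \(W\sim EV(0,1)\) by inversion, the empirical counterpart of \(E[(it+1-\text {e}^W)\text {e}^{itW}]\) should be close to zero, while it is bounded away from zero under an alternative such as the standard normal. A sketch (the helper name is ours):

```python
import numpy as np

def eta_hat(w, t):
    """Monte Carlo estimate of E[(it + 1 - e^W) e^{itW}]."""
    return np.mean((1j * t + 1.0 - np.exp(w)) * np.exp(1j * t * w))

rng = np.random.default_rng(0)
n = 200_000
# EV(0,1) by inversion: G(x) = 1 - exp(-exp(x))  =>  x = log(-log(1 - u))
w = np.log(-np.log1p(-rng.uniform(size=n)))

ev_val = abs(eta_hat(w, 1.0))                        # close to 0 under EV(0,1)
nrm_val = abs(eta_hat(rng.standard_normal(n), 1.0))  # bounded away from 0
```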

Let w(t) be a non-negative, symmetric weight function. From Theorem 1, we have that

$$\begin{aligned} \eta&= \int _{-\infty }^{\infty } E\left[ \left( it+1-\text {e}^Y\right) \text {e}^{itY}\right] w(t)\mathrm {d}t \nonumber \\&= \int _{-\infty }^{\infty } \int _{-\infty }^{\infty } \left( it+1-\text {e}^y\right) \text {e}^{ity} \mathrm {d}G(y) w(t)\mathrm {d}t \end{aligned}$$
(3)

equals 0 if \(Y\sim EV(0,1)\). Note that the inclusion of the weight function, w, above is required to ensure that \(\eta \) is finite. Clearly \(\eta \) will be unknown because G is unknown. However, G can be estimated by the Kaplan-Meier estimator, \(G_n\), of the distribution function given by

$$\begin{aligned} 1-G_n(t)= {\left\{ \begin{array}{ll} 1, &{} t \le Y_{(1)}, \\ \prod _{j=1}^{k-1}\left( \frac{n-j}{n-j+1}\right) ^{\delta _{(j)}}, &{} Y_{(k-1)}< t \le Y_{(k)}, \ \ \ \ k=2,\dots , n, \\ \prod _{j=1}^n \left( \frac{n-j}{n-j+1}\right) ^{\delta _{(j)}}, &{} t>Y_{(n)}. \end{array}\right. } \end{aligned}$$

More details about this estimator can be found in Kaplan and Meier (1958), Efron (1967) as well as Breslow and Crowley (1974). In the full sample case this estimator reduces to the standard empirical distribution function, \(G_n(X_{(j)})=j/n\).

Let \(\varDelta _j\) denote the size of the jump of \(G_n\) at \(T_{(j)}\);

$$\begin{aligned} \varDelta _j=G_n(T_{(j)})-\lim _{t \uparrow T_{(j)}}G_n(t), \ \ j=1,\ldots ,n. \end{aligned}$$

Simple calculable expressions for the \(\varDelta _j\)’s are

$$\begin{aligned} \varDelta _1= & {} \frac{\delta _{(1)}}{n}, \text { } \varDelta _n = \prod _{j=1}^{n-1}\left( \frac{n-j}{n-j+1}\right) ^{\delta _{(j)}} \text { and}\\ \varDelta _j= & {} \prod _{k=1}^{j-1} \left( \frac{n-k}{n-k+1}\right) ^{\delta _{(k)}} - \prod _{k=1}^{j} \left( \frac{n-k}{n-k+1}\right) ^{\delta _{(k)}}\\= & {} \frac{\delta _{(j)}}{n-j+1} \prod _{k=1}^{j-1} \left( \frac{n-k}{n-k+1}\right) ^{\delta _{(k)}}, \ j=2,\dots ,n-1. \end{aligned}$$

In the full sample case \(\varDelta _j=1/n, j=1,\ldots ,n\).
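The closed-form expressions above translate directly into code. A sketch (the helper name is ours); following the expression for \(\varDelta _n\), all remaining mass is placed on the largest observation:

```python
import numpy as np

def km_jumps(y, delta):
    """Jump sizes of the Kaplan-Meier estimate G_n at the order statistics of y;
    delta[j] = 1 if the j-th observation is uncensored. Any remaining mass is
    placed on the largest observation."""
    n = len(y)
    order = np.argsort(y)
    d = np.asarray(delta, float)[order]
    j = np.arange(1, n + 1)
    factors = ((n - j) / (n - j + 1.0)) ** d                       # ((n-j)/(n-j+1))^{delta_(j)}
    surv_prev = np.concatenate(([1.0], np.cumprod(factors)[:-1]))  # product over k < j
    jumps = surv_prev * d / (n - j + 1.0)
    jumps[-1] = surv_prev[-1]
    return jumps, order

# full sample: every jump equals 1/n
jumps_full, _ = km_jumps([3.0, 1.0, 2.0, 5.0, 4.0, 6.0], [1, 1, 1, 1, 1, 1])
# censored observations (other than the largest) carry no mass
jumps_cens, _ = km_jumps([2.0, 1.0, 4.0, 3.0, 5.0], [1, 0, 1, 1, 0])
```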

Estimating G by \(G_n\) in (3), we propose the test statistic

$$\begin{aligned} S_{n,a} = n\int _{-\infty }^{\infty } \left| \sum _{j=1}^n \varDelta _j \Big [it\text {e}^{itY_j}+(1-\text {e}^{Y_j})\text {e}^{itY_j}\Big ]\right| ^2 w_a(t) \mathrm {d}t, \end{aligned}$$
(4)

where \(w_a(t)\) is a weight function containing a user-defined tuning parameter \(a>0\). The null hypothesis in (1) is rejected for large values of \(S_{n,a}\).

Straightforward algebra shows that, if \(w_a(t)=\text {e}^{-at^2}\), then the test statistic simplifies to

$$\begin{aligned} S_{n,a}^{(1)}&= n\sqrt{\frac{\pi }{a}} \sum _{j=1}^n\sum _{k=1}^n \varDelta _j\varDelta _k \text {e}^{-(Y_j-Y_k)^2/4a}\left\{ -\frac{1}{4a^2}\left( (Y_j-Y_k)^2-2a\right) \right. \\&+ \left. 2\left( 1-\text {e}^{Y_j}\right) \left( \frac{1}{2a}\right) \left( Y_j-Y_k\right) + \left( 1-\text {e}^{Y_j}\right) \left( 1-\text {e}^{Y_k}\right) \right\} , \end{aligned}$$

and if \(w_a(t)=\text {e}^{-a|t|}\) the test statistic has the following easily calculable form

$$\begin{aligned} S_{n,a}^{(2)}&= n\sum _{j=1}^n\sum _{k=1}^n \varDelta _j\varDelta _k \left\{ \frac{-4a\left( 3(Y_j-Y_k)^2-a^2\right) }{\left( (Y_j-Y_k)^2+a^2\right) ^3}\right. \\&\quad +\left. \frac{8a(Y_j-Y_k)\left( 1-\text {e}^{Y_j}\right) }{\left( (Y_j-Y_k)^2+a^2\right) ^2} + \frac{2a\left( 1-\text {e}^{Y_j}\right) \left( 1-\text {e}^{Y_k}\right) }{(Y_j-Y_k)^2+a^2} \right\} . \end{aligned}$$
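Both closed forms can be verified against brute-force numerical integration of (4). A sketch (function names are ours) comparing each closed form to a trapezoidal approximation of the integral on a small full-sample example:

```python
import numpy as np

def S1(y, jumps, a):
    """Closed form of S_{n,a}^{(1)} (Gaussian weight w_a(t) = exp(-a t^2))."""
    n = len(y)
    D = y[:, None] - y[None, :]          # D[j, k] = Y_j - Y_k
    b = 1.0 - np.exp(y)
    K = (-(D**2 - 2.0 * a) / (4.0 * a**2)
         + 2.0 * b[:, None] * D / (2.0 * a)
         + b[:, None] * b[None, :])
    W = jumps[:, None] * jumps[None, :]
    return n * np.sqrt(np.pi / a) * np.sum(W * np.exp(-D**2 / (4.0 * a)) * K)

def S2(y, jumps, a):
    """Closed form of S_{n,a}^{(2)} (Laplace weight w_a(t) = exp(-a|t|))."""
    n = len(y)
    D = y[:, None] - y[None, :]
    b = 1.0 - np.exp(y)
    A = D**2 + a**2
    K = (-4.0 * a * (3.0 * D**2 - a**2) / A**3
         + 8.0 * a * D * b[:, None] / A**2
         + 2.0 * a * b[:, None] * b[None, :] / A)
    return n * np.sum(jumps[:, None] * jumps[None, :] * K)

def S_numeric(y, jumps, weight, tmax=60.0, m=200_001):
    """Brute-force trapezoidal approximation of (4)."""
    t = np.linspace(-tmax, tmax, m)
    phase = np.exp(1j * np.outer(t, y))                              # e^{itY_j}
    eta = phase @ (jumps * (1.0 - np.exp(y))) + 1j * t * (phase @ jumps)
    vals = np.abs(eta) ** 2 * weight(t)
    h = t[1] - t[0]
    return len(y) * h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

rng = np.random.default_rng(3)
y = np.log(rng.exponential(size=6))      # small EV(0,1) sample
jumps = np.full(6, 1.0 / 6)              # full-sample case: Delta_j = 1/n
s1_closed = S1(y, jumps, 2.0)
s1_num = S_numeric(y, jumps, lambda t: np.exp(-2.0 * t**2))
s2_closed = S2(y, jumps, 2.0)
s2_num = S_numeric(y, jumps, lambda t: np.exp(-2.0 * np.abs(t)))
```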

New goodness-of-fit tests containing a tuning parameter are often accompanied by a recommended value of this parameter; this choice is typically based on the finite sample power performance of the test. Another approach which may be used is to choose the value of the tuning parameter data-dependently; see e.g., Allison and Santana (2015). In this paper, we opt to use the values recommended in the literature for tests containing a tuning parameter.

The weight functions specified above correspond to scaled Gaussian and Laplace kernels. These weight functions are popular choices found in the goodness-of-fit literature; see e.g., Meintanis and Iliopoulos (2003), Allison et al. (2017), Betsch and Ebner (2018), Betsch and Ebner (2019) as well as Henze and Visagie (2020). The popularity of these weight functions is, at least in part, due to the fact that their inclusion typically results in simple calculable forms for \(L^2\)-type statistics which do not require numerical integration. As an alternative to the weight functions used, one may employ a symmetric uniform kernel as a weight function; see e.g., Fernández et al. (2008). However, in the mentioned paper, the authors found that the computational time required for the test statistic obtained using the symmetric uniform kernel was substantial. This, coupled with the simple computational forms obtained using the weight functions defined above and the favourable power performance discussed in Sect. 3, motivated us to restrict our attention to the scaled Gaussian and Laplace kernels.

Although we do not derive the asymptotic results related to the proposed classes of test statistics, we include some remarks in this regard. \(S_{n,a}\) is a characteristic function-based weighted \(L^2\)-type statistic. The asymptotic properties of this class of statistics, in the complete sample case, are studied in detail in Feuerverger and Mureika (1977), while more recent references include Baringhaus and Henze (1988), Klar and Meintanis (2005) as well as Baringhaus et al. (2017). A convenient setting for the derivation of the asymptotic properties of these tests is the separable Hilbert space of square integrable functions. Typically, the asymptotic null distribution of \(S_{n,a}\) corresponds to that of \(\int _{-\infty }^{\infty } \left| Z(t)\right| ^2 w_a(t) \mathrm {d}t =: S_a\), where \(Z(\cdot )\) is a zero-mean Gaussian process. The distribution of \(S_a\) is the same as that of \(\sum _{j=1}^{\infty } \lambda _jU_j\), where \(U_j\) are i.i.d. chi-squared random variables with one degree of freedom and where \(\lambda _j\) are eigenvalues of an integral operator (see e.g., Allison et al. (2021)). These tests are consistent against a large class of fixed alternative distributions. Additionally, these tests are frequently consistent against contiguous alternatives converging to the null at a rate of \(n^{-1/2}\).

In the case of random censoring, few asymptotic results are available in the literature for test statistics of this type. Very recently, advances have been made in this regard; see e.g., Cuparić and Milošević (2021), in which a test for exponentiality based on so-called inverse probability of censoring weights is considered and the asymptotic properties of the test are derived. In addition, Fernández and Rivera (2020) studied Kaplan-Meier U- and V-statistics in order to derive some asymptotic results relating to the lifetime distribution. Some of these results may be helpful in deriving the asymptotic properties of the tests proposed in this paper in future research.

The null distribution of each of the test statistics considered depends on the unknown censoring distribution, even in the case of a simple hypothesis, see D’Agostino and Stephens (1986). Since we will not assume any known form of the censoring distribution, we propose the following parametric bootstrap algorithm to estimate the critical values of the tests.

  1. Based on the pairs \((T_j,\delta _j), \ j=1,\dots ,n\), estimate \(\theta \) and \(\lambda \) by \(\hat{\theta }\) and \(\hat{\lambda }\), respectively, using maximum likelihood estimation.

  2. Transform \(T_j\) to \(Y_j\) using the transformation in (2) for \(j=1,\dots ,n\).

  3. Calculate the test statistic, say \(W_n := W(Y_1,\ldots ,Y_n;\delta _1,\ldots ,\delta _n)\).

  4. Obtain a parametric bootstrap sample \(X_1^*, \dots , X_n^*\) by sampling from a Weibull distribution with parameters \(\hat{\theta }\) and \(\hat{\lambda }\).

  5. Obtain a non-parametric bootstrap sample, \(C_1^*,\ldots ,C_n^*\), by sampling from the Kaplan-Meier estimate of the censoring distribution, calculated from the pairs \((T_j,1-\delta _j), \ j=1,\dots ,n\), so that the \(C_j^*\) are on the same (original) time scale as the \(X_j^*\).

  6. Set

     $$\begin{aligned} T_j^*=\text{ min }(X_j^*,C_j^*) \ \text{ and } \ \delta _j^*= {\left\{ \begin{array}{ll} 1, &{} \text {if}\ X_j^*\le C_j^*, \\ 0, &{} \text {if}\ X_j^* > C_j^*. \end{array}\right. } \end{aligned}$$

  7. Calculate \(\hat{\theta }^*\) and \(\hat{\lambda }^*\) based on \((T_j^*,\delta _j^*), \ j=1,\dots ,n\).

  8. Obtain \(Y_j^* = \hat{\theta }^*\left[ \log (T_j^*)-\log (\hat{\lambda }^*)\right] ,\ j=1,\dots ,n\).

  9. Based on the pairs \(\left( Y_j^*,\delta _j^*\right) , \ j=1,\dots ,n\), calculate the value of the test statistic, say \(W_n^* := W(Y_1^*,\ldots ,Y_n^*;\delta _1^*,\ldots ,\delta _n^*)\).

  10. Repeat steps 4-9 B times to obtain \(W_1^*, \dots , W_B^*\) and the corresponding order statistics \(W_{(1)}^* \le \dots \le W_{(B)}^*\). The estimated critical value is then \(\hat{c}_n(\alpha ) = W_{(\lfloor B(1-\alpha )\rfloor )}^*\), where \(\lfloor c\rfloor \) denotes the floor of c.

The algorithm provided above is quite general and can easily be amended in order to test for any lifetime distribution in the presence of random censoring. In the absence of censoring there is no need to implement this algorithm; in this case, the critical values can be obtained via Monte Carlo simulation by sampling from any Weibull distribution and effecting the transformation discussed above.
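The steps of the algorithm can be sketched as follows (all function names are ours, and we resample the censoring times on the original time scale so that the minimum in step 6 is well defined). The statistic `stat` used below is a placeholder for illustration only; in practice one would plug in \(S_{n,a}^{(1)}\), \(S_{n,a}^{(2)}\) or any of the other statistics considered:

```python
import numpy as np

def weibull_mle(t, delta):
    """Censored Weibull MLE via bisection on the profile score (see Sect. 1)."""
    t, delta = np.asarray(t, float), np.asarray(delta, float)
    d, logt = delta.sum(), np.log(t)
    lo, hi = 1e-3, 50.0
    for _ in range(100):
        th = 0.5 * (lo + hi)
        w = t ** th
        if 1.0 / th + (delta @ logt) / d - (w @ logt) / w.sum() > 0:
            lo = th
        else:
            hi = th
    th = 0.5 * (lo + hi)
    return th, (np.sum(t ** th) / d) ** (1.0 / th)

def km_sample(rng, t, events, size):
    """Draw from the Kaplan-Meier estimate fitted to (t, events); any
    leftover mass is placed on the largest observation."""
    order = np.argsort(t)
    ts, d = np.asarray(t, float)[order], np.asarray(events, float)[order]
    n = len(ts)
    j = np.arange(1, n + 1)
    factors = ((n - j) / (n - j + 1.0)) ** d
    surv_prev = np.concatenate(([1.0], np.cumprod(factors)[:-1]))
    jumps = surv_prev * d / (n - j + 1.0)
    jumps[-1] = surv_prev[-1]
    return rng.choice(ts, size=size, p=jumps / jumps.sum())

def bootstrap_critical_value(rng, t, delta, stat, B=500, alpha=0.10):
    th, lam = weibull_mle(t, delta)                          # step 1
    n, w_star = len(t), np.empty(B)
    for b in range(B):
        x_b = lam * rng.weibull(th, n)                       # step 4
        c_b = km_sample(rng, t, 1.0 - np.asarray(delta), n)  # step 5
        t_b = np.minimum(x_b, c_b)                           # step 6
        d_b = (x_b <= c_b).astype(float)
        th_b, lam_b = weibull_mle(t_b, d_b)                  # step 7
        y_b = th_b * (np.log(t_b) - np.log(lam_b))           # step 8
        w_star[b] = stat(y_b, d_b)                           # step 9
    return np.sort(w_star)[int(np.floor(B * (1.0 - alpha))) - 1]  # step 10

def stat(y, d):
    # placeholder statistic for illustration only: distance of the sample mean
    # from -0.5772... (the EV(0,1) mean); substitute a real test statistic here
    return abs(y.mean() + 0.5772156649)

rng = np.random.default_rng(5)
x = 2.0 * rng.weibull(1.5, 40)
c = rng.exponential(6.0, 40)
t, delta = np.minimum(x, c), (x <= c).astype(float)
c_hat = bootstrap_critical_value(rng, t, delta, stat, B=200)
```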

3 Numerical results

In this section, we compare the power performances of the newly proposed tests to those of existing tests via a Monte Carlo simulation study. The existing tests used include the classical Kolmogorov-Smirnov (\(KS_n\)) and Cramér-von Mises (\(CM_n\)) tests. These tests have been modified for use with censored data, see Koziol and Green (1976). The test introduced in Liao and Shimokawa (1999) is considered in the case of full samples. A modification making this test suitable for use with censored data is proposed in Kim (2017); we denote the test statistic by \(LS_{n}\) in both the full sample and censored cases. The calculable forms of the test statistics mentioned above are

$$\begin{aligned} KS_n&= \max \left[ \max _{1\le j \le n} \left\{ G_n(Y_{(j)})-\left( 1-\text {e}^{-\text {e}^{Y_{(j)}}}\right) \right\} , \right. \\&\left. \max _{1\le j \le n} \left\{ \left( 1-\text {e}^{-\text {e}^{Y_{(j)}}}\right) -G_n^-(Y_{(j)})\right\} \right] ,\\ CM_n&= \frac{n}{3} + n\sum _{j=1}^{d+1} \left\{ G_n\left( X^{(t)}_{j-1}\right) \left( X^{(t)}_j-X^{(t)}_{j-1}\right) \right. \\&\left. \times \left[ G_n\left( X^{(t)}_{j-1}\right) -\left( X^{(t)}_j+X^{(t)}_{j-1}\right) \right] \right\} ,\\ LS_{n}&= \frac{1}{\sqrt{n}}\sum _{j=1}^n \left( \frac{\max \left[ j/n-G_n(Y_{(j)}),G_n(Y_{(j)})-(j-1)/n\right] }{\sqrt{G_n(Y_{(j)})\left[ 1-G_n(Y_{(j)})\right] }}\right) . \end{aligned}$$

Krit (2014) proposes a test for the Weibull distribution in the full sample case. This test compares the empirical Laplace transform of the random variables resulting from the transformation in (2) to the Laplace transform of an EV(0, 1) random variable; \(\psi (t) = \varGamma (1-t)\) for \(t<1\). Let \(\psi _n\) be the empirical Laplace transform of the transformed observations, obtained using the Kaplan-Meier estimate of the distribution function;

$$\begin{aligned} \psi _n(t) = \int _{-\infty }^{\infty }\text {e}^{-tx} \text {d}G_n(x) = \sum _{j=1}^n \varDelta _j \text {e}^{-tY_j}. \end{aligned}$$
(5)

The resulting test statistic is

$$\begin{aligned} KR^*_{n} = n\int _{I} \left[ \psi _n(t)-\varGamma (1-t)\right] ^2 w_a(t) \mathrm {d}t, \end{aligned}$$
(6)

where \(w_a(t)= \text {e}^{at-\text {e}^{at}}\) is a weight function, a is a user-specified tuning parameter and I is some interval. Based on numerical considerations, Krit (2014) suggests that \(I=(-1,0]\) should be used. The quantity in (6) can be approximated by a Riemann sum;

$$\begin{aligned} KR_{n} = n\sum _{k=-m}^{-1}\left[ \sum _{j=1}^n \varDelta _j\text {e}^{-Y_jk/m}-\varGamma (1-k/m)\right] ^2 \text {e}^{ak/m-\text {e}^{ak/m}}, \end{aligned}$$

where m is the number of points at which the integrand is evaluated. In the numerical results shown below, we use \(a=-5\) and \(m=100\) as recommended in Krit (2014). In the full sample case, \(G_n\), in (5), is taken to be the empirical distribution function. Upon setting \(\varDelta _j=1/n\) we obtain the test statistic in Krit (2014).
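A sketch of the Riemann-sum approximation \(KR_n\) (the function name is ours), shown here in the full sample case with \(\varDelta _j=1/n\), together with a location-shifted sample as a rough check that the statistic reacts to departures from the null:

```python
import numpy as np
from math import gamma

def KR(y, jumps, a=-5.0, m=100):
    """Riemann-sum approximation KR_n of (6) on the interval I = (-1, 0]."""
    n = len(y)
    ks = np.arange(-m, 0) / m                      # grid points k/m, k = -m,...,-1
    psi_n = np.exp(-np.outer(ks, y)) @ jumps       # empirical Laplace transform (5)
    psi = np.array([gamma(1.0 - k) for k in ks])   # Laplace transform of EV(0,1)
    w = np.exp(a * ks - np.exp(a * ks))            # weight w_a(t) = e^{at - e^{at}}
    return n * np.sum((psi_n - psi) ** 2 * w)

rng = np.random.default_rng(2)
y = np.log(rng.exponential(size=50))       # EV(0,1) sample (null model)
jumps = np.full(50, 1.0 / 50)              # full-sample case: Delta_j = 1/n
kr_null = KR(y, jumps)
kr_shift = KR(y + 2.0, jumps)              # location-shifted alternative
```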

For each of the tests considered above, the null hypothesis in (1) is rejected for large values of the test statistics.

3.1 Simulation setting

In the numerical results presented below, we use a nominal significance level of 10% throughout. Empirical powers are presented for sample sizes \(n=50\) and \(n=100\). The empirical powers for complete and censored samples are reported; censoring proportions of 10% and 20% are included. For each lifetime distribution considered, we report the powers obtained using three different censoring distributions. The first censoring distribution used is the exponential distribution, the parameter of which is chosen so as to obtain the specified level of censoring. The second censoring distribution used is the uniform distribution with support (0, m); again, m is chosen such that the required censoring level is achieved. The final censoring distribution used is the Koziol-Green model proposed in Koziol and Green (1976). Denote the survival function of a given lifetime distribution by S. In this case, the Koziol-Green censoring distribution, indexed by \(\beta \), has survival function \(S^\beta (t)\). It can be shown that the censoring proportion is \(\beta /(1+\beta )\). We choose the value of \(\beta \) so as to ensure that the required level of censoring is achieved. The alternative lifetime distributions considered are listed in Table 1. Note that each of the alternatives considered has the same support as the Weibull distribution, with the exception of the beta distribution, which is restricted to the unit interval (0, 1) (Tables 10, 11, 12 and 13).
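The Koziol-Green mechanism is easy to simulate by inversion: if \(S^\beta (c)=u\), then \(c=S^{-1}(u^{1/\beta })\), and the empirical censoring proportion should be close to \(\beta /(1+\beta )\). A sketch with arbitrarily chosen Weibull parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n, lam, theta, beta = 200_000, 2.0, 1.5, 0.25   # beta/(1+beta) = 20% censoring

x = lam * rng.weibull(theta, n)                 # Weibull(lambda, theta) lifetimes
# C has survival S(t)^beta with S(t) = exp(-(t/lam)^theta), so by inversion
# C = S^{-1}(U^{1/beta}) = lam * (-log(U)/beta)^{1/theta}
u = rng.uniform(size=n)
c = lam * (-np.log(u) / beta) ** (1.0 / theta)
cens_prop = np.mean(x > c)                      # observed censoring proportion
```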

Table 1 Density functions of the alternative distributions
Table 2 Estimated powers for the full sample case where \(n=50\)
Table 3 Heatmap of the estimated powers for the full sample case where \(n=50\)
Table 4 Estimated powers for the full sample case where \(n=100\)
Table 5 Heatmap of the estimated powers for the full sample case where \(n=100\)
Table 6 Estimated powers for \(10\%\) censoring for a sample size of \(n=50\) with three different censoring distributions
Table 7 Estimated powers for \(20\%\) censoring for a sample size of \(n=50\) with three different censoring distributions
Table 8 Estimated powers for \(10\%\) censoring for a sample size of \(n=100\) with three different censoring distributions
Table 9 Estimated powers for \(20\%\) censoring for a sample size of \(n=100\) with three different censoring distributions
Table 10 Survival times after leukemia diagnosis, in days
Table 11 p-values associated with the various tests used in the full sample case
Table 12 Initial remission times of leukemia patients, in days
Table 13 p-values associated with the various tests used in the censored case

The obtained empirical powers are presented in Tables 2, 3, 4, 5, 6, 7, 8 and 9. These tables report the percentage of 50 000 independent Monte Carlo samples that lead to the rejection of the null hypothesis, rounded to the nearest integer. For ease of comparison, the highest power in each line is printed in bold. Tables 2 and 4 contain the results relating to full samples. In order to ease visual comparison of the results obtained, we include so-called "heatmaps", see Döring and Cramer (2019), of these results in Tables 3 and 5. For each test considered, Tables 6, 7, 8 and 9 show three empirical powers against each lifetime distribution, corresponding to the three different censoring distributions used. In each case, the results for the exponential, uniform and Koziol-Green models are shown in the first, second and third lines, respectively.

In order to reduce the computational cost associated with the calculation of the numerical powers, a warp-speed bootstrap procedure, see Giacomini et al. (2013), is employed. This methodology has been employed by a number of authors in the literature to compare Monte Carlo performances; see e.g., Meintanis et al. (2018), Allison et al. (2019) as well as Mijburgh and Visagie (2020). The bootstrap algorithm in Sect. 2 is implemented to calculate the critical values used to obtain the results in Tables 6, 7, 8 and 9.

For \(S_{n,a}^{(1)}\) and \(S_{n,a}^{(2)}\), we include numerical powers in the cases where a is set to \(1, \ 2\) and 5 in the full sample case and to \(0.75, \ 1\) and 2 in the presence of censoring. The difference between these two sets of choices is due to the fact that the newly proposed tests exhibit higher powers in the presence of censoring if slightly smaller values of a are used. All calculations are performed in R, see [44]. The LindleyR package is used to generate samples from censored distributions, see Mazucheli et al. (2016). Parameter estimation is performed using the parmsurvfit package, see Jacobson et al. (2018), while the tables are produced using the Stargazer package, see Hlavac (2018).

3.2 Simulation results

First, we consider the results associated with the full sample case, given in Tables 2 and 4, together with the heatmaps shown in Tables 3 and 5. All of the tests considered attain the nominal size for both sample sizes used. The tests associated with the highest powers are \(S_{n,2}^{(1)}\) and \(S_{n,5}^{(1)}\), although \(LS_{n}\) and \(S_{n,5}^{(2)}\) also perform well. When analysing complete samples, we recommend using \(S_{n,2}^{(1)}\) or \(S_{n,5}^{(1)}\).

We now turn our attention to the powers achieved in the presence of censoring. The sizes of the tests are closely maintained for all sample sizes for censoring proportions of 10% and 20%, with the single exception of \(KR_{n}\) in the case of small sample sizes. As expected, the powers generally increase with sample size and decrease marginally as the censoring proportion increases.

Comparing the results associated with a sample size of 50, we see that \(S_{n,1}^{(1)}\), \(KR_{n}\) and \(LS_n\) generally tend to provide the highest powers. However, it should be noted that \(KR_{n}\) achieves very low power against certain alternatives; notably against the beta distributions considered when the censoring distribution is the Koziol-Green model. When considering the empirical powers associated with samples of size 100, \(S_{n,2}^{(1)}\) exhibits the highest powers, followed by \(S_{n,1}^{(1)}\) and \(LS_n\).

When compiling the numerical results, we also considered a wider range of values for the tuning parameter a than those reported in the table. Although some power variation is evident when varying a, the powers achieved by the newly proposed classes of tests are not particularly sensitive to the choice of the tuning parameter a. However, based on the observed numerical powers, we recommend using \(S_{n,2}^{(1)}\) when testing the hypothesis in question in the presence of censoring.

4 Practical applications and conclusion

In this section, we apply the tests used in Sect. 3 to test the hypothesis in (1) based on two real-world data sets. The first data set, reported in Table 10, contains the survival times, in days, of 43 leukemia patients. For a discussion of the original data set see Kotze and Johnson (1983) as well as Allison et al. (2017). This data set is not subject to censoring; i.e., all lifetimes are observed. The second data set contains the initial remission times of leukemia patients, in days; for more details see Lee and Wang (2003). This data set can be found in Table 12. These data contain censored observations, indicated by an asterisk. The original data were segmented into three treatment groups. However, Lee and Wang (2003) showed that the data do not display significant differences among the various treatments. As a result, we treat the data as i.i.d. realisations from a single, censored, lifetime distribution. All reported p-values are estimated using one hundred thousand bootstrap replications; these results are displayed in Tables 11 and 13, respectively.

From the results of the practical example in Table 11 it is clear that none of the tests reject the null hypothesis that the survival times after a leukemia diagnosis are Weibull distributed at the 5% or 10% levels of significance. As a result, we conclude that the Weibull distribution is an appropriate model for these data.

The results associated with the initial remission times, in Table 13, indicate that \(KS_n\), \(CM_n\) and \(LS_n\) reject the hypothesis in (1) at a 5% significance level. However, none of the remaining 7 tests considered result in a rejection of the null hypothesis at the 5% or 10% levels. We conclude that the Weibull distribution is likely to be an appropriate model for the observed times. The data set under consideration was also analysed in Bothma et al. (2020), where the null hypothesis of exponentiality of the remission time was strongly rejected. The mentioned paper recommended that a more flexible distribution be used when modelling these data. The results above indicate that the additional flexibility of the Weibull (compared to the exponential) distribution indeed ensures that the Weibull distribution is a more appropriate model than the exponential for the initial remission times considered.

A number of interesting numerical phenomena are evident when considering the powers of the various tests. It is clear that the achieved powers, and therefore the null distribution of the test statistic, are influenced by the shape of the censoring distribution. The effect of the censoring distribution on the critical values of the tests seems not to have been investigated in the literature to date. Some authors perform goodness-of-fit testing by enforcing a parametric assumption on the censoring distribution; see e.g., Kim (2017). An additional consideration that seems to have been neglected in the literature is the effect on the null distribution of the test statistic, and hence on the power of the test, of a specific convention used in the Kaplan-Meier estimate of the distribution function. Some authors, in order to ensure that \(G_n\) satisfies the requirements of a distribution function, define \(G_n(t)=1\) for all \(t>X_{(n)}\), regardless of whether or not the sample maximum is censored. We are currently investigating these open questions.