Using Weighted Differences in Hazards as Effect Sizes for Survival Data

Because they are sensitive to changes in sample size, traditional measures such as test statistic values or p values can fail to quantify the difference in survival between populations for time-to-event data. We therefore propose to use effect sizes defined as weighted differences in hazards to evaluate the survival difference between two groups. On the basis of the logrank test statistic, the Gehan–Wilcoxon test statistic and the Prentice–Wilcoxon test statistic, we developed three effect sizes that compare the survival experiences over the time period of investigation. Estimates of these three effect sizes were provided and their large sample behaviors were studied. In light of the Mann–Whitney parameter, we presented an effect size that compares the survival experiences over the entire possible/hypothetical time period. Two estimates of this effect size were constructed. We compared the proposed effect sizes and illustrated their use through theoretical studies, simulations and real cancer data. The effect sizes proposed in this article can help in understanding the survival difference between populations and are expected to have promising applications in the field of survival analysis.


Introduction
To report the difference (with respect to some characteristic, e.g., weight or blood pressure value) between two groups (populations), there is nothing more common than to conduct a hypothesis test and check the "statistical significance." However, along with a series of criticisms [2,44], the conventional, dichotomous way of using the p value (e.g., ≤ 0.05 versus > 0.05) has faced an unprecedented debate over its abuse and misuse. Consequently, there is a need to find surrogate measures that can genuinely present findings from different studies in the hope of drawing better inferences.
It seems that the p value, even without being dichotomized, cannot serve as such a measure of the difference between groups. This is mainly because the p value is influenced by the change in sample sizes, and there is no simple relationship between the sample size and the p value that can be used to adjust the result. What measures would better describe the difference between groups?
A recommended alternative would be the effect size [2,44]. An effect size is a quantitative measure of the magnitude of a phenomenon and is unaffected by the change in sample sizes. When analyzing or mining data, effect sizes allow researchers to ensure that statistically significant effects inferred from a sample are substantial enough to be considered practically meaningful [27]. Certain effect sizes have been proposed for data types other than time-to-event, e.g., the correlation coefficient, relative risk, Cohen's d [11], etc. For time-to-event data with censoring involved, there does not seem to be much research focusing on effect sizes that can be used to assess the difference in survival between groups.
An obvious effect size for time-to-event data is the hazard ratio from the Cox model [12]. However, it has long been known that the hazard ratio is a valid measure only under the proportional hazards (PH) assumption. Applying PH analysis to studies where PH does not hold can lead to misleading conclusions [37]. Even worse, the PH assumption is rarely checked formally. The commonly used tests for examining the PH assumption, such as the Grambsch-Therneau test, may not capture non-PH in studies with small sample sizes, yet almost always lead to rejection when sample sizes are large [42]. Alternatively, the PH assumption can be examined by using residual plots (e.g., plotting martingale or Schoenfeld residuals against time), by checking the predictor-time interaction effect, or by visually inspecting the survival curves. Note that these methods usually provide a subjective assessment. Therefore, it is important to find an informative estimate of the difference between two survival curves regardless of the PH assumption. In the current literature, there are two types of estimates toward this goal: the average hazard ratio (AHR) proposed by [38] and the restricted mean survival time (RMST) proposed by [37]. However, the interpretation of AHR can be difficult and misleading due to its complex definition [37]. And because it is difficult to select an appropriate period of time over which to compute RMST, the use of RMST seems to have been limited.
In this paper, we propose to use weighted differences in hazards as effect sizes for studying the survival difference between two groups. These proposed effect sizes do not impose any particular assumption about the distribution of the failure time: they can be applied with or without the PH assumption. We will show that the commonly used logrank test statistic [30], Gehan-Wilcoxon test statistic [18] and Prentice-Wilcoxon test statistic [35], as well as the well-known Mann-Whitney parameter [29], can be exploited to develop such effect sizes.
This paper is organized in the following way. Section 2 introduces three test statistics-based effect sizes, estimates of the effect sizes and their large sample results. Section 3 introduces the Mann-Whitney parameter-based effect size and two estimates of the effect size. Section 4 presents a comparison among the proposed effect sizes. Section 5 partitions values of effect sizes to facilitate applications. Section 6 illustrates various aspects, including applications, of the proposed effect sizes by simulations and real cancer data. We conclude in Sect. 7.

Effect Sizes Based on Test Statistics
The logrank test is one of the most popular methods for testing the null hypothesis $H_0$ that there is no difference in survival between two groups. The test is asymptotically fully efficient for testing equality of survival distributions in a proportional hazards family when no censoring is present or when the censoring distributions are the same for the compared groups [28]. The original logrank test assigns an equal weight to each failure time. Gehan [18], Peto and Peto [34], Tarone and Ware [41], Prentice [35], Andersen [3] and Fleming and Harrington [17] modified the logrank test by assigning a different weight at each failure time so that the tests can be more sensitive to the difference in survival distributions in a particular time interval. Assigning varying weights over time to the logrank test produces weighted logrank tests that have been proven to be more efficient when hazards are non-proportional [16].
Suppose that two samples of survival data containing a total of $n$ subjects are available, with sample $i$ coming from group $i$. Let $t_1, \ldots, t_J$ be the distinct failure times in increasing order from the pooled sample. Let $D_{ij}$ and $Y_{ij}$ be, respectively, the number of subjects who failed and were at risk at $t_j$ in sample $i$ ($i = 1, 2$; $j = 1, 2, \ldots, J$). Also let $D_j$ and $Y_j$ be, respectively, the total number of subjects who failed and were at risk at $t_j$ in both samples. Then, the general form of the weighted logrank test statistic, denoted by $L_w$, is
$$L_w = \frac{\sum_{j=1}^{J} w_j \left(D_{1j} - Y_{1j} D_j / Y_j\right)}{\left\{\sum_{j=1}^{J} w_j^2 \frac{Y_{1j}}{Y_j}\left(1 - \frac{Y_{1j}}{Y_j}\right)\frac{Y_j - D_j}{Y_j - 1} D_j\right\}^{1/2}}, \qquad (1)$$
where the weight $w_j = w_n(t_j)$ for a positive weight function $w_n(\cdot)$. $L_w$ includes as special cases many well-known test statistics, such as the logrank test statistic (weights $w_j = 1$), the Gehan-Wilcoxon test statistic (weights $w_j = Y_j/n$) and the Prentice-Wilcoxon test statistic (weights $w_j = \widehat{S}(t_j-) = \prod_{k: t_k < t_j} (1 - D_k/Y_k)$) [16]. It is known that $L_w$ is close to zero when the two survival functions are almost identical and has a large absolute value when the two survival functions are far apart. Therefore, $L_w$ could serve as a measure of the difference in survival between two groups. However, the absolute value of $L_w$ tends to increase (without bound) as the sample size increases. Consequently, as the data size increases, the p value decreases to 0, indicating that with a large data size the p value can be non-informative in quantifying differences. Therefore, none of $L_w$, $|L_w|$ and the p value, computed from large samples, can provide an appropriate measure of the difference in survival. It is then of interest to find a statistic that can measure the survival difference and at the same time is "unaffected" by changes in sample sizes. Such a statistic is an estimate of an effect size and can be derived in light of the numerator of $L_w$, as shown in this paper. The denominator, an estimate of the standard deviation of the numerator under the null hypothesis $H_0$, will not be utilized in constructing estimates of effect sizes.
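As a concrete illustration, the numerator $U$ of $L_w$ and the scaled quantity $\frac{n}{n_1 n_2} U$ can be computed directly from right-censored data. The sketch below is our own illustrative code (the data layout and function name are assumptions, not part of the paper); it evaluates the scaled numerator under the three weight choices named above.

```python
# Sketch (illustrative, not the paper's code): the numerator U of the
# weighted logrank statistic, scaled by n / (n1 * n2), for the logrank,
# Gehan-Wilcoxon and Prentice-Wilcoxon weight choices.

def scaled_numerators(times, events, groups):
    """times: observed times; events: 1 = failure, 0 = censored;
    groups: 1 or 2.  Returns (n / (n1 * n2)) * U per weight choice."""
    n = len(times)
    n1 = sum(1 for g in groups if g == 1)
    n2 = n - n1
    fail_times = sorted({t for t, e in zip(times, events) if e == 1})
    U = {"logrank": 0.0, "gehan": 0.0, "prentice": 0.0}
    S_pooled = 1.0  # left-continuous pooled Kaplan-Meier value S(t_j -)
    for tj in fail_times:
        Yj = sum(1 for t in times if t >= tj)                 # pooled at risk
        Y1j = sum(1 for t, g in zip(times, groups) if t >= tj and g == 1)
        Dj = sum(1 for t, e in zip(times, events) if t == tj and e == 1)
        D1j = sum(1 for t, e, g in zip(times, events, groups)
                  if t == tj and e == 1 and g == 1)
        obs_minus_exp = D1j - Y1j * Dj / Yj
        U["logrank"] += 1.0 * obs_minus_exp                   # w_j = 1
        U["gehan"] += (Yj / n) * obs_minus_exp                # w_j = Y_j / n
        U["prentice"] += S_pooled * obs_minus_exp             # w_j = S(t_j -)
        S_pooled *= 1 - Dj / Yj                               # update pooled KM
    return {k: n / (n1 * n2) * v for k, v in U.items()}
```

For a toy uncensored dataset in which every group 1 subject fails before every group 2 subject, all three scaled values are positive (a "higher" hazard in group 1), and the Gehan-based value reaches its boundary of 1.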
Along with our development, we will use several notations associated with the groups and samples. Let $T$ denote the random variable of the actual failure time, $C$ the random variable of the censoring time, $X$ the minimum of $T$ and $C$, and $Z$ the indicator of the group number with $Z = i$ indicating group $i$ ($i = 1, 2$). Suppose that in the study, the assignment of a subject to group $i$ has probability $P_i$. For group $i$, let $\lambda_i(t)$, $f_i(t)$, $S_i(t)$ and $S_i^*(t)$ denote, respectively, the hazard function, the density function of the failure time, the survival function of the failure time and the censoring survival function (i.e., the survival function of the censoring time). We assume that in each group the censoring distribution and the survival distribution are continuous and independent of each other. The above quantities can be combined into
$$\pi_i(t) = S_i(t) S_i^*(t), \qquad \pi(t) = P_1 \pi_1(t) + P_2 \pi_2(t).$$
The quantity $\pi_i(t)$ is the probability of being at risk (at time $t$) for subjects in group $i$. The following notations are needed for survival data. For sample $i$, let $n_i$ denote the total number of subjects under study, and let $Q_i(t)$, $N_i(t)$ and $Y_i(t)$ represent, respectively, the number of subjects with observed or unobserved failure times larger than or equal to $t$, the number of observed failures after time 0 up through and including time $t$, and the number of subjects at risk at time $t$. Note that $Q_i(t)$, $N_i(t)$ and $Y_i(t)$ depend on the sample size. To facilitate the calculation, we will use the convention $0/0 = 0$.
Let $U$ denote the numerator of $L_w$ in Eq. (1). We have
$$U = \sum_{j=1}^{J} w_j \left(D_{1j} - \frac{Y_{1j} D_j}{Y_j}\right) = \sum_{j=1}^{J} w_j \frac{Y_{1j} Y_{2j}}{Y_j} \left(\frac{D_{1j}}{Y_{1j}} - \frac{D_{2j}}{Y_{2j}}\right).$$
Clearly, $D_{ij}/Y_{ij}$ approximates $\lambda_i(t_j)\Delta t$, where $\Delta t$ is the length of an infinitesimal interval of the time. Below we will show that the limit of $\frac{n}{n_1 n_2} U$ ($n \to \infty$) is a weighted difference (via integration) between the two hazard functions of the two groups. This indicates $\frac{n}{n_1 n_2} U$ could be used to estimate the effect size that measures the difference in survival experiences between group 1 and group 2.
Theorem 1 Suppose that $w_n(t)$ is a predictable process such that $0 \le |w_n(t)| \le 1$ for all $t \in [0, \infty]$. Also suppose that for almost all $t$, $w_n(t)$ converges in probability to a deterministic function $\omega(t)$. Then
$$\frac{n}{n_1 n_2} U \xrightarrow{\;P\;} \int_0^\infty \omega(t) \frac{\pi_1(t)\pi_2(t)}{\pi(t)} \left(\lambda_1(t) - \lambda_2(t)\right) dt,$$
provided that the integrand is absolutely integrable.
The limit $\int_0^\infty \omega(t) \frac{\pi_1(t)\pi_2(t)}{\pi(t)} (\lambda_1(t) - \lambda_2(t))\,dt$ in Theorem 1 is a weighted difference in hazards with weight at time $t$ equal to $\frac{\pi_1(t)\pi_2(t)}{\pi(t)}\,\omega(t)$ and does not depend on the sample size. Therefore, it can serve as an effect size measuring the survival difference between two groups. In general, this effect size is unknown. Theorem 1 shows that an estimate of the effect size, i.e., $\frac{n}{n_1 n_2} U$, can be computed using the data at hand.
In many cases, it is straightforward to verify the conditions in Theorem 1. Let $\widehat{ES}_L$, $\widehat{ES}_G$ and $\widehat{ES}_P$ denote $\frac{n}{n_1 n_2} U$ corresponding to the weight selection for the logrank, Gehan-Wilcoxon and Prentice-Wilcoxon test statistic, respectively. That is,
$$\widehat{ES}_L = \frac{n}{n_1 n_2} \sum_{j=1}^{J} \left(D_{1j} - \frac{Y_{1j} D_j}{Y_j}\right), \qquad (2)$$
$$\widehat{ES}_G = \frac{n}{n_1 n_2} \sum_{j=1}^{J} \frac{Y_j}{n} \left(D_{1j} - \frac{Y_{1j} D_j}{Y_j}\right), \qquad (3)$$
$$\widehat{ES}_P = \frac{n}{n_1 n_2} \sum_{j=1}^{J} \widehat{S}(t_j-) \left(D_{1j} - \frac{Y_{1j} D_j}{Y_j}\right). \qquad (4)$$
Then it follows from Theorem 1 that the following results hold true.
The Corollary indicates that $\widehat{ES}_L$, $\widehat{ES}_G$ and $\widehat{ES}_P$ estimate, respectively, the following three effect sizes:
$$ES_L = \int_0^\infty \frac{\pi_1(t)\pi_2(t)}{\pi(t)} \left(\lambda_1(t) - \lambda_2(t)\right) dt, \qquad (5)$$
$$ES_G = \int_0^\infty \pi_1(t)\pi_2(t) \left(\lambda_1(t) - \lambda_2(t)\right) dt, \qquad (6)$$
and
$$ES_P = \int_0^\infty S_1(t) S_2(t) S^*(t) \left(\lambda_1(t) - \lambda_2(t)\right) dt, \qquad (7)$$
where (7) assumes a common censoring survival function $S_1^* = S_2^* = S^*$. Clearly, all three integrals depend on the censoring distribution. The proof of the corollary shows that both $ES_G$ and $ES_P$ belong to the interval $[-2, 2]$. Actually, Sect. 4.2 will show that $ES_G$ is a difference between two probabilities and thus lies inside the interval $[-1, 1]$. In comparison, there are no constants $a$ and $b$ such that $ES_L$ always lies inside the interval $[a, b]$. Note that $ES_L$, $ES_G$ and $ES_P$ from the formulas can be negative, and their magnitudes (i.e., the absolute values) are often the main interest in practical use. It follows from their definition that a positive (negative) effect size means a positive (negative) weighted difference in hazards between the two groups, which implies a "higher" ("lower") hazard in group 1 than in group 2. This information can be useful when comparing two groups. For instance, if group 1 represents the treatment and group 2 represents the control, then a positive $ES_L$ implies an adverse effect from the treatment.

Effect Sizes Based on the Mann-Whitney Parameter
The Mann-Whitney parameter $P(T_1 > T_2)$ assesses the stochastic ordering between the distributions of two variables $T_1$ and $T_2$. In this paper, $T_i$ is the variable of the actual failure time in group $i$ for $i = 1, 2$. If $P(T_1 > T_2) = 0.5$, then $T_1$ and $T_2$ are "statistically indifferent" to each other; if $P(T_1 > T_2) > 0.5$, then $T_1$ is "statistically preferred" to $T_2$ [33]. Therefore, $P(T_1 > T_2)$ can serve as a measure of the difference between the two distributions of $T_1$ and $T_2$. Based on $P(T_1 > T_2)$, we define
$$ES_{MW} = 1 - 2P(T_1 > T_2) \qquad (8)$$
as an effect size of the difference between two distributions. Note that $ES_{MW}$ ranges from $-1$ to $1$, with its minimum $-1$ and maximum $1$ corresponding to $P(T_1 > T_2) = 1$ and $0$, respectively. As with the test statistic-based effect sizes, often the magnitude of $ES_{MW}$ is of interest for comparisons. In order to estimate $ES_{MW}$, it suffices to estimate $P(T_1 > T_2)$. When data are free of censoring, $P(T_1 > T_2)$ can be estimated by the Mann-Whitney test statistic [29]. If censoring exists, $P(T_1 > T_2)$ cannot be estimated by the Mann-Whitney test statistic since some actual failure times are unknown. In this case, an alternative way to estimate $P(T_1 > T_2)$ is needed. Note that
$$P(T_1 > T_2) = \int_0^\infty S_1(t) f_2(t)\,dt = -\int_0^\infty S_1(t)\,dS_2(t), \qquad (9)$$
where $S_i$ and $f_i$ are, respectively, the survival function and density function for $T_i$ ($i = 1, 2$). Let $\widehat{S}_1$ and $\widehat{S}_2$ be the Kaplan-Meier estimates of $S_1$ and $S_2$, respectively. Efron proposed to use $\widehat{D} = -\int_0^\infty \widehat{S}_1(t)\,d\widehat{S}_2(t)$ to estimate the right side of (9), assuming that the largest observation of each group is uncensored [15]. However, in practice it is very common that this assumption is not met. Without this assumption, using $\widehat{D}$ to estimate $P(T_1 > T_2)$ may cause trouble since the Kaplan-Meier estimates could be undefined for the potentially important tail of a survival function. Below we present two remedies, which overcome the drawback of $\widehat{D}$, to estimate the Mann-Whitney parameter.
One approach is based on completing the estimated survival functions using exponential tails, and the other on the conditional estimation.
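For reference, Efron's plug-in estimate reduces to a finite sum: $-\int \widehat{S}_1\,d\widehat{S}_2$ accumulates $\widehat{S}_1(t_j)$ times the drop of $\widehat{S}_2$ at each failure time $t_j$ of sample 2. The sketch below is our own illustrative code (helper names are assumptions; a basic Kaplan-Meier routine is included for self-containment).

```python
# Sketch (illustrative, not the paper's code): Kaplan-Meier estimation and
# the plug-in estimate D-hat = -integral S1-hat dS2-hat of P(T1 > T2).

def kaplan_meier(times, events):
    """Return the sorted failure times and the KM survival values just
    after each of them (a right-continuous step function)."""
    pts, surv, s = [], [], 1.0
    for tj in sorted({t for t, e in zip(times, events) if e == 1}):
        Y = sum(1 for t in times if t >= tj)                  # at risk
        D = sum(1 for t, e in zip(times, events) if t == tj and e == 1)
        s *= 1 - D / Y
        pts.append(tj)
        surv.append(s)
    return pts, surv

def efron_estimate(times1, events1, times2, events2):
    """Accumulate S1-hat(t) times the mass that S2-hat drops at t, over the
    failure times of sample 2 (evaluating S1-hat right-continuously, so
    tied times count as 'not greater')."""
    p1, s1 = kaplan_meier(times1, events1)
    p2, s2 = kaplan_meier(times2, events2)
    total, prev = 0.0, 1.0
    for tj, sj in zip(p2, s2):
        S1_tj = 1.0
        for t, s in zip(p1, s1):
            if t <= tj:
                S1_tj = s
            else:
                break
        total += S1_tj * (prev - sj)
        prev = sj
    return total
```

With fully uncensored, well-separated samples the estimate hits the extremes: it is 1 when every sample 1 time exceeds every sample 2 time, and 0 in the reverse order.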

Estimating ES MW by Adding Exponential Tails
There are several methods to complete estimated survival functions by adding tails. Let $\widehat{S}(t)$ be the Kaplan-Meier estimate of the survival function $S(t)$. Efron suggested estimating $S(t)$ by 0 beyond the largest observed study time $t_{\max}$ [15]. Gill recommended estimating $S(t)$ by $\widehat{S}(t_{\max})$ beyond $t_{\max}$ [20]. Moeschberger and Klein proposed estimating $S(t)$ beyond $t_{\max}$ by a restricted Weibull survival curve that goes through the two points $(0, 1)$ and $(t_{\max}, \widehat{S}(t_{\max}))$ [32]. Brown, Hollander and Korwar suggested estimating $S(t)$ beyond $t_{\max}$ by an exponential curve going through $(0, 1)$ and $(t_{\max}, \widehat{S}(t_{\max}))$ [7]. We note that both methods from Efron and Gill are sensitive to heavy or administrative censoring. Adding Weibull tails could lead to crossing survival functions for populations where survival functions are not supposed to cross each other. Adding exponential tails does not suffer from these drawbacks and hence is a preferred choice. We start this discussion by replacing the tail of a survival function with an exponential function. Let $\tau_i$ be a time point such that for $t > \tau_i$, $S_i(t)$ is replaced by the exponential curve going through the two points $(0, 1)$ and $(\tau_i, S_i(\tau_i))$ ($i = 1, 2$). That is, $S_i(t) = e^{-\lambda_i t}$ for $t > \tau_i$ with $\lambda_i = -\log(S_i(\tau_i))/\tau_i$. Then the Mann-Whitney parameter is approximated by $D_e(\tau_1, \tau_2) = A + B + C$, where $A$, $B$ and $C$ are the contributions to the right side of (9) from the regions of the time axis determined by $\tau_1$ and $\tau_2$, with the exponential tails used beyond those points. Consequently, $ES_{MW}$ can be approximated by
$$ES_{MWE}(\tau_1, \tau_2) = 1 - 2D_e(\tau_1, \tau_2). \qquad (10)$$
It is worth noting that an exponential tail leads to a constant hazard beyond $\tau_i$, and two exponential tails do not intersect unless they coincide exactly. When survival data are available, we can use the Kaplan-Meier estimates $\widehat{S}_i(\cdot)$ and Lebesgue-Stieltjes integration in the calculation of $A$, $B$ and $C$ to obtain $\widehat{A}$, $\widehat{B}$ and $\widehat{C}$. Set $\widehat{D}_e(\tau_1, \tau_2) = \widehat{A} + \widehat{B} + \widehat{C}$. Then we can estimate $ES_{MW}$ by using $1 - 2\widehat{D}_e(\tau_1, \tau_2)$. In reality, tied failure times occur due to rounding or common data collection practice/logistics (e.g., failure times are measured in months).
As a result, it follows from the law of total probability that $\widehat{D}_e(\tau_1, \tau_2)$ and $\widehat{D}_e(\tau_2, \tau_1)$ based on a real dataset usually slightly underestimate $P(T_1 > T_2)$ and $P(T_2 > T_1)$, respectively. To make the adjustment, we add half of the estimated tie mass, $\frac{1}{2}\left(1 - \widehat{D}_e(\tau_2, \tau_1) - \widehat{D}_e(\tau_1, \tau_2)\right)$, to the term $\widehat{D}_e(\tau_1, \tau_2)$ in $1 - 2\widehat{D}_e(\tau_1, \tau_2)$ to obtain an estimate of $ES_{MW}$, i.e.,
$$\widehat{ES}_{MWE}(\tau_1, \tau_2) = \widehat{D}_e(\tau_2, \tau_1) - \widehat{D}_e(\tau_1, \tau_2). \qquad (11)$$
Equation (11) should be used when tied failure times are possible. To be safe, it is recommended that Eq. (11) always be used in practice.
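The mechanics of the exponential-tail completion can be sketched in a few lines. The code below is illustrative (the helper names and the step-function representation of the Kaplan-Meier curve are our assumptions): beyond a chosen point $\tau$, the curve is replaced by $e^{-\lambda t}$ with $\lambda$ solving $e^{-\lambda \tau} = \widehat{S}(\tau)$.

```python
# Sketch (illustrative) of completing a Kaplan-Meier curve with an
# exponential tail passing through (0, 1) and (tau, S-hat(tau)).
import math

def exp_tail_rate(S_tau, tau):
    """Rate lam solving exp(-lam * tau) = S_tau."""
    return -math.log(S_tau) / tau

def completed_survival(t, km_points, tau):
    """km_points: list of (time, S-hat(time)) steps of a KM curve.
    Before tau, step through the KM values; beyond tau, use the tail."""
    if t <= tau:
        # KM is non-increasing, so the current value is the smallest
        # step value at or before t (1.0 before the first failure).
        return min((s for x, s in km_points if x <= t), default=1.0)
    S_tau = min((s for x, s in km_points if x <= tau), default=1.0)
    return math.exp(-exp_tail_rate(S_tau, tau) * t)
```

Because each completed tail is $e^{-\lambda_i t}$ anchored at $(0, 1)$, two tails with different rates keep a fixed order for all $t > 0$, which is the non-crossing property noted above.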

Estimating ES MW Through Conditioning
Let $\tau$ be a preset time point before which we have knowledge about $S_1$ and $S_2$. Then we may approximate the Mann-Whitney parameter $P(T_1 > T_2)$ by the conditional probability
$$D_c(\tau) = P(T_1 > T_2 \mid T_1 \wedge T_2 \le \tau) = \frac{-\int_0^\tau S_1(t)\,dS_2(t)}{1 - S_1(\tau)S_2(\tau)}.$$
Consequently, $ES_{MW}$ can be approximated by
$$ES_{MWC}(\tau) = 1 - 2D_c(\tau). \qquad (12)$$
When survival data are available, we can estimate $D_c(\tau)$ using the Kaplan-Meier estimates $\widehat{S}_1(\cdot)$ and $\widehat{S}_2(\cdot)$. Similar to (11), adjusting for ties yields another estimate of $ES_{MW}$, denoted by $\widehat{ES}_{MWC}(\tau)$ (Eq. (13)).

Selection of τ1, τ2 and τ
We can determine $\tau_1$, $\tau_2$ for $\widehat{ES}_{MWE}(\tau_1, \tau_2)$ and $\tau$ for $\widehat{ES}_{MWC}(\tau)$ by using the data. Let $t_{\max}^i$ be the largest observed time (either censoring or failure time) from group $i$ ($i = 1, 2$). We can set $\tau_1 = \tau_2 = \tau = \tau^*$, where $\tau^*$ is less than or equal to $\min(t_{\max}^1, t_{\max}^2)$. The time point $\tau^*$ should be selected to avoid the unstable end (if any) of the Kaplan-Meier curves and, at the same time, be as close as possible to $\min(t_{\max}^1, t_{\max}^2)$ to make as much use as possible of the information from the Kaplan-Meier estimates. Such a setting of $\tau_1$, $\tau_2$ and $\tau$ will allow $\widehat{ES}_{MWE}(\tau_1, \tau_2)$ and $\widehat{ES}_{MWC}(\tau)$ to produce reasonably good estimates of $ES_{MW}$ if the assumption of proportional hazards holds true. This is seen in the following theorem.

Theorem 2 Let $\lambda_1(t)$ and $\lambda_2(t)$ be the hazard functions for $T_1$ and $T_2$, respectively. If there exists a constant $r$ such that $r = \lambda_1(t)/\lambda_2(t)$ for any $t > 0$, then for any $\tau^* > 0$, we have
$$ES_{MW} = ES_{MWE}(\tau_1 = \tau^*, \tau_2 = \tau^*) = ES_{MWC}(\tau = \tau^*) = \frac{r - 1}{r + 1},$$
where $ES_{MW}$, $ES_{MWE}(\tau_1, \tau_2)$ and $ES_{MWC}(\tau)$ are defined by Eqs. (8), (10) and (12), respectively.
Proof See "Appendix 3".

Note that in many situations, survival functions can be modeled by distributions with proportional hazards. For example, if $T_1 \sim \text{Weibull}(\alpha, \beta_1)$ and $T_2 \sim \text{Weibull}(\alpha, \beta_2)$ with a common shape parameter $\alpha$, then $\lambda_1(t)/\lambda_2(t) = (\beta_2/\beta_1)^\alpha$ for all $t$. Also note that the conclusion of the theorem may not hold if the assumption of proportional hazards is violated, as shown below. Let $T_1 \sim \text{Weibull}(2.5, 0.5)$ and $T_2 \sim \text{Weibull}(5, 0.5)$ denote the variables of failure times in group 1 and group 2, respectively. The ratio of the two hazard functions is proportional to $t^{2.5-5} = t^{-2.5}$ and hence varies with $t$. In light of Theorem 2, $\widehat{ES}_{MWE}(\tau_1 = \tau^*, \tau_2 = \tau^*)$ and $\widehat{ES}_{MWC}(\tau = \tau^*)$ can estimate $ES_{MW}$ very well when the proportional hazards assumption holds true. It turns out they can also estimate $ES_{MW}$ well under the condition that $S_1(\tau^*)$ and $S_2(\tau^*)$ are close to 0. This is seen in the following theorem.
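The two Weibull facts above can be checked numerically. The sketch below is illustrative code (our own; it assumes the Weibull(shape, scale) parameterization $S(t) = \exp(-(t/\text{scale})^{\text{shape}})$, whose hazard is $h(t) = (\text{shape}/\text{scale})(t/\text{scale})^{\text{shape}-1}$): equal shapes give a hazard ratio constant in $t$, while the unequal-shape pair Weibull(2.5, 0.5) versus Weibull(5, 0.5) gives a ratio proportional to $t^{-2.5}$.

```python
# Numerical check (illustrative) of constant vs non-constant Weibull
# hazard ratios under S(t) = exp(-(t/scale)**shape).

def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def ratio(t, params1, params2):
    """Hazard ratio lambda1(t) / lambda2(t) at time t."""
    return weibull_hazard(t, *params1) / weibull_hazard(t, *params2)
```

With shapes equal (2.5 vs 2.5), the ratio is $(\beta_2/\beta_1)^{2.5}$ at every $t$; with shapes 2.5 vs 5, evaluating at two time points shows the ratio changing by the factor $t^{-2.5}$ predicts.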

Comparing Effect Sizes
Two types of effect sizes have been developed: the effect sizes E S L , E S G and E S P based on test statistics, and the effect sizes E S MW based on the Mann-Whitney parameter. Various comparisons among these effect sizes are available.

Effect Sizes as Weighted Differences in Hazards
All four effect sizes represent a weighted difference between two hazard functions and share the same format of definition:
$$ES = \int_0^\infty \xi(t) \left(\lambda_1(t) - \lambda_2(t)\right) dt. \qquad (14)$$
It follows from (5), (6) and (7) that $ES_L$, $ES_G$ and $ES_P$ (assuming $S_1^* = S_2^* = S^*$ for $ES_P$) correspond to
$$\xi_L(t) = \frac{\pi_1(t)\pi_2(t)}{\pi(t)}, \qquad \xi_G(t) = \pi_1(t)\pi_2(t), \qquad \xi_P(t) = S_1(t)S_2(t)S^*(t).$$
The effect size $ES_{MW}$ corresponds to $\xi_{MW}(t) = S_1(t)S_2(t)$. Formula (14) represents a weighted difference between two hazard functions with the weight at time $t$ equal to $\xi(t)$. The interpretation of the weight for each effect size is clear. For $ES_{MW}$, the weight $\xi_{MW}(t)$ is the probability that the failure times of both groups are beyond $t$. For $ES_G$, the weight $\xi_G(t)$ is the probability that the observed times (either failure time or censoring time) of both groups are beyond $t$. For $ES_L$, the weight $\xi_L(t)$ is the ratio of the probability that the observed times of both groups are beyond $t$ to the probability that the observed time of the combined group is beyond $t$. Note that $\xi_L(t)$ is essentially the weighted harmonic mean of $\pi_1(t)$ and $\pi_2(t)$. For $ES_P$, the weight $\xi_P(t)$ is the ratio of the probability that the observed times of both groups are beyond $t$ to the probability that the censoring time is beyond $t$. Note that $\xi_P(t)$ is in fact the geometric mean of $\xi_{MW}(t)$ and $\xi_G(t)$.
Weights can be compared under various conditions. For any censoring distributions, $\xi_G(t) \le \xi_{MW}(t)$ for all $t$. Assuming the same censoring distribution for both groups, i.e., $S_1^* = S_2^* = S^*$, we further have $\xi_G(t) \le \xi_P(t) \le \xi_{MW}(t)$. Furthermore, if we assume the same light censoring distribution for both groups, the weights $\xi_G$, $\xi_P$ and $\xi_{MW}$ are usually close to each other; if we assume the same heavy censoring distribution for both groups, the weights $\xi_G$, $\xi_P$ and $\xi_L$ are usually much smaller than the weight $\xi_{MW}$.
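These orderings are easy to verify numerically. The sketch below is illustrative code (our own; it assumes exponential failure and censoring times with a common censoring distribution $S_1^* = S_2^* = S^*$, so that $\xi_{MW} = S_1 S_2$, $\xi_G = S_1 S_2 (S^*)^2$ and $\xi_P$, as the geometric mean of the two, equals $S_1 S_2 S^*$).

```python
# Numerical illustration (illustrative, exponential distributions, equal
# censoring for both groups) of the weight functions xi_G, xi_P, xi_MW.
import math

def weights(t, a1=1.0, a2=2.0, c=0.5):
    """Failure rates a1, a2; common censoring rate c (assumed values)."""
    S1, S2, Sc = math.exp(-a1 * t), math.exp(-a2 * t), math.exp(-c * t)
    xi_mw = S1 * S2                 # both failure times beyond t
    xi_g = S1 * S2 * Sc ** 2        # both observed times beyond t
    xi_p = S1 * S2 * Sc             # geometric mean of xi_g and xi_mw
    return xi_g, xi_p, xi_mw
```

At any time point the ordering $\xi_G \le \xi_P \le \xi_{MW}$ holds, and $\xi_P$ equals $\sqrt{\xi_G\,\xi_{MW}}$ exactly; as the censoring rate $c$ approaches 0 (light censoring), the three weights merge.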

What Do ES L , ES G and ES P Actually Measure?
The test statistic-based effect sizes $ES_L$, $ES_G$ and $ES_P$ depend on censoring through the supports and sizes of the censoring survival functions. The supports of the censoring survival functions determine where the weighted differences in hazards are computed. Clearly, formula (14) shows that these effect sizes are weighted differences in hazard functions over the time period where censoring is possible. This indicates that these effect sizes capture the difference in hazards during the time period of study, which is designed to compare the two groups. The sizes of the censoring survival functions impact the magnitudes of the effect sizes. From (14), it is seen that small magnitudes of the effect sizes are produced if either $S_1^*$ or $S_2^*$ or both are small. To illustrate the above points, consider the scenario where an administrative censoring occurs at time $\tilde{t}$, i.e., the study is terminated at time $\tilde{t}$. We have $S_1^*(t) = S_2^*(t) = 1$ for $t < \tilde{t}$ and $= 0$ for $t > \tilde{t}$, so that
$$ES_G = \int_0^{\tilde{t}} S_1(t) S_2(t) \left(\lambda_1(t) - \lambda_2(t)\right) dt.$$
It is seen that $ES_G$ computes the weighted difference in hazard functions before time $\tilde{t}$, and thus $ES_G$ only compares the two groups before time $\tilde{t}$. When $\tilde{t}$ is smaller and thus censoring is heavier, $ES_G$ becomes smaller (assuming $\lambda_1(t) \ge \lambda_2(t)$). As $\tilde{t}$ approaches 0, $ES_G$ goes to 0.
In practice, there are no special restrictions on the use of $ES_G$. Attention should be paid when using $ES_L$ and $ES_P$: $ES_L$ is sensitive to the change in sample proportions, and the use of $ES_P$ requires the equality of the two censoring distributions.
To conclude, we note that $ES_G$ is a difference between two special probabilities. Let $T_i$, $C_i$ and $X_i$ denote, respectively, the variable of failure time, the variable of censoring time and the minimum of $T_i$ and $C_i$ in group $i$ ($i = 1, 2$). Then it is easy to show that $ES_G = A - B$, where $A = P(X_2 > X_1, T_1 \le C_1)$ and $B = P(X_1 > X_2, T_2 \le C_2)$. Clearly $A$ is the probability that the observed time of a randomly selected subject from group 2 is longer than that of a randomly selected subject from group 1 who met the event, i.e., the probability that a randomly selected subject from group 2 can be observed to live longer than a randomly selected subject from group 1. A similar statement can be made for $B$. Therefore, $ES_G$ simply compares the failure times between groups through comparing observed times. In contrast, $ES_{MW} = P(T_2 > T_1) - P(T_1 > T_2)$ compares the underlying failure times directly.
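The probability representation of $ES_G$ can be checked by simulation. The sketch below is illustrative code (our own, with assumed exponential rates): for exponential failure times with rates $a_1, a_2$ and exponential censoring times with rates $c_1, c_2$, the weighted-difference integral for $ES_G$ evaluates in closed form to $(a_1 - a_2)/(a_1 + a_2 + c_1 + c_2)$, which the Monte Carlo estimate of $A - B$ should match.

```python
# Monte Carlo sketch (illustrative, exponential failure and censoring):
# estimate A - B, with A = P(X2 > X1, T1 <= C1) and
# B = P(X1 > X2, T2 <= C2), and compare with the closed form of ES_G.
import random

random.seed(1)
a1, a2, c1, c2 = 2.0, 1.0, 0.5, 0.5
N = 200_000
A = B = 0
for _ in range(N):
    T1, C1 = random.expovariate(a1), random.expovariate(c1)
    T2, C2 = random.expovariate(a2), random.expovariate(c2)
    X1, X2 = min(T1, C1), min(T2, C2)
    if X2 > X1 and T1 <= C1:
        A += 1
    if X1 > X2 and T2 <= C2:
        B += 1
es_g_mc = (A - B) / N
es_g_exact = (a1 - a2) / (a1 + a2 + c1 + c2)   # closed form for this setup
```

With the rates assumed above, the closed form is 0.25; the Monte Carlo difference of observed-time comparisons agrees to within simulation error.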

Practical Issues in Estimating ES MW
The effect size $ES_{MW}$ has a clear interpretation. That is, any value of $ES_{MW}$ will immediately translate to the Mann-Whitney parameter $P(T_1 > T_2)$. However, estimating $ES_{MW}$ can be problematic in practice: there may not be enough information to support estimating $ES_{MW}$. Again consider the scenario where an administrative censoring is present so that the study terminates at time $t = \tilde{t}$. As a result, estimating $ES_{MW}$ could become impossible without additional information on $T_1$ and $T_2$. This is illustrated in the following example. Consider two cases, with the densities of $T_1$ and $T_2$ specified as in Fig. 2a for case 1 and in Fig. 2b for case 2. Assume that in both cases, the only censoring is the administrative censoring uniformly distributed between 0 and $\tilde{t} = 0.25$. Then in either case, we will only be able to observe the survivals $S_1(t)$ and $S_2(t)$ up to time $\tilde{t}$, and these observable portions are identical for both cases. Therefore, estimating $ES_{MW}$ using only the observed data in either case becomes problematic. If both Kaplan-Meier curves drop to 0 at the end, $ES_{MW}$ can be estimated by $1 - 2\widehat{D}$, where $\widehat{D}$ is the estimate proposed by Efron [15]. If neither of the two Kaplan-Meier curves drops to 0 at the end, $ES_{MW}$ can be estimated by both $\widehat{ES}_{MWE}(\tau_1 = \tau^*, \tau_2 = \tau^*)$ and $\widehat{ES}_{MWC}(\tau = \tau^*)$ when the proportional hazards assumption holds true (Theorem 2) or when $S_1(\tau^*)$ and $S_2(\tau^*)$ are close to 0 (Theorem 3).
If the Kaplan-Meier curve for $S_1$ drops to 0 at time $t_a$ and the other curve for $S_2$ stops at time $t_b \ge t_a$ but does not drop to 0, then it is easy to show that $ES_{MW}$ can be estimated by $1 + 2\int_0^{t_a} \widehat{S}_1(t)\,d\widehat{S}_2(t)$. If the Kaplan-Meier curve for $S_1$ drops to 0 at time $t_a$ and the other curve for $S_2$ stops at time $t_b < t_a$ but does not drop to 0, then $ES_{MW}$ can again be estimated by $\widehat{ES}_{MWE}(\tau_1 = \tau^*, \tau_2 = \tau^*)$ and $\widehat{ES}_{MWC}(\tau = \tau^*)$ when the proportional hazards assumption holds true or when $S_1(\tau^*)$ and $S_2(\tau^*)$ are close to 0.
We note that $ES_{MW}$ may have no clinical meaning at all in some situations. For instance, consider two survival functions $S_1$ and $S_2$ whose Kaplan-Meier estimates $\widehat{S}_1$ and $\widehat{S}_2$ are close to 1 over a long study time. Then clinically, the difference between $S_1$ and $S_2$ could be ignored. However, $ES_{MW}$ between $S_1$ and $S_2$ can be large, say, close to 1, since
$$ES_{MW} = \int_0^\infty S_1(t) S_2(t) \left(\lambda_1(t) - \lambda_2(t)\right) dt$$
compares the difference in hazards over the entire supports of the survival functions. As such, a large $ES_{MW}$ could be clinically meaningless. This problem can occur with diseases that have a very optimistic survival, such as thyroid cancer. To resolve this issue, the test statistic-based effect sizes should be applied since they only compare the difference over the period of study time.

Relationship Among ES L , ES G , ES P , ES MW and Hazard Ratio
Examining the functions $\xi(t)$ in (14) leads to the following straightforward comparisons among the values of $ES_L$, $ES_G$, $ES_P$, $ES_{MW}$ and the hazard ratio.
- If censoring is light, the censoring survival functions are close to 1, so that $ES_G$ and $ES_P$ are close to $ES_{MW}$.
- The value of $ES_L$ is influenced by the probabilities by which subjects are assigned to groups. For instance, if $\pi_1(t) \le \pi_2(t)$ and $\lambda_1(t) \ge \lambda_2(t)$ for all $t \in [0, \infty]$, then $ES_L$ increases as $P_1$ increases.
- There is a relationship between $ES_{MW}$ and the hazard ratio, a widely used effect size in survival analysis. Assume a constant hazard ratio $r = \lambda_1(t)/\lambda_2(t)$. Then from Theorem 2, we have $ES_{MW} = 1 - 2P(T_1 > T_2) = \frac{r-1}{r+1}$. In particular, $ES_{MW} = 0$ if $r = 1$ and $ES_{MW} \to 1\ (-1)$ as $r \to \infty\ (0)$. In addition, by Taylor expansion, $ES_{MW} \approx \theta/2$ for $\theta$ close to 0, where $\theta = \log r$.
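The last relationship can be verified numerically. The sketch below is illustrative code (our own); note that with $r = e^\theta$, the quantity $(r-1)/(r+1)$ equals $\tanh(\theta/2)$ exactly, which is where the $\theta/2$ approximation for small $\theta$ comes from.

```python
# Quick arithmetic check (illustrative) of ES_MW = (r - 1)/(r + 1)
# = tanh(theta/2) with theta = log r, hence ES_MW ~ theta/2 near r = 1.
import math

def es_mw_from_ratio(r):
    """ES_MW under a constant hazard ratio r (Theorem 2 relationship)."""
    return (r - 1) / (r + 1)
```

For $r$ close to 1 the value is nearly $\theta/2$; as $r$ grows without bound the value approaches 1, and it is 0 at $r = 1$.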

Partition of Values of Effect Sizes
The proposed effect sizes can quantify the difference between two populations of time-to-event data. In many cases, one has to assess whether an effect size is sufficiently large to be (practically) meaningful. For instance, in a clinical setting, one may need to evaluate the clinical meaningfulness of an effect size. As such, certain rules are needed in order to determine whether an estimate of an effect size is small, medium or large. This involves partitioning the values of effect sizes. For demonstration, below we develop such rules for the effect sizes $ES_{MW}$ and $ES_G$.

The rule for $ES_{MW}$ can be derived in light of the widely used rule of thumb on the magnitude of Cohen's d. The natural log of the exponentially distributed survival time was considered because the transformed time has the same variance for all exponentially distributed times and thus Cohen's d has a clear interpretation [5]. Under the assumption that the failure time is exponentially distributed in each group, Cohen's d can be written as a function of the hazard ratio, $d = -\log(r)\sqrt{6}/\pi$ ("Appendix 5"), and hence small, medium and large hazard ratios correspond to values of 1.29, 1.90 and 2.79, respectively [5]. According to $ES_{MW} = \frac{r-1}{r+1}$, the relationship between the hazard ratio and the effect size $ES_{MW}$ stated in Sect. 4.4, the corresponding small, medium and large values of $ES_{MW}$ are approximately 0.13, 0.31 and 0.47.

The same idea can be used to derive a rule for $ES_G$. In addition to the assumption that failure times in the two groups are exponentially distributed, we assume censoring times in the two groups are exponentially distributed. Then $ES_G$ can be written as a function of Cohen's d and the censoring rates $CR_i = P(T_i > C_i)$ (see "Appendix 5" for the formula and a proof). Note that $CR_i$ is the censoring rate in group $i$, which can be estimated by the observed censoring rate $\widehat{CR}_i$. Using this formula, we can obtain a rule of thumb regarding when the effect size $ES_G$ is small, medium and large.
Specifically, for any fixed C R i , the small, medium and large effect sizes E S G are computed by using the corresponding values of Cohen's d (small, 0.2; medium, 0.5; large, 0.8). Table 1 shows a list of small, medium and large effect sizes E S G for selected C R i . Using the midpoints of numbers in any triplet in the table, intervals for small, medium and large effect sizes can be easily obtained. The above rules for E S MW and E S G are developed under the assumption that actual failure/censoring times follow an exponential distribution. In cases where this assumption does not hold, the rules provide a simple and useful reference.
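The small/medium/large cutoffs for the hazard ratio and $ES_{MW}$ follow directly from the two stated formulas. The sketch below is illustrative code (our own; it inverts $|d| = \log(r)\sqrt{6}/\pi$, i.e., it uses the magnitude of Cohen's d so that $r > 1$).

```python
# Reproducing the rule-of-thumb cutoffs (illustrative) from the stated
# relation |d| = log(r) * sqrt(6) / pi and ES_MW = (r - 1)/(r + 1).
import math

def hazard_ratio_from_d(d):
    """Hazard ratio r corresponding to a (positive) Cohen's d."""
    return math.exp(d * math.pi / math.sqrt(6))

cutoffs = {d: hazard_ratio_from_d(d) for d in (0.2, 0.5, 0.8)}
es_mw_cutoffs = {d: (r - 1) / (r + 1) for d, r in cutoffs.items()}
```

The conventional d values 0.2, 0.5 and 0.8 map to hazard ratios of about 1.29, 1.90 and 2.79, and in turn to $ES_{MW}$ values of about 0.13, 0.31 and 0.47.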

Example 1: Simulations
This example presents a simulation study on five effect size estimates: $\widehat{ES}_L$, $\widehat{ES}_G$, $\widehat{ES}_P$, $\widehat{ES}_{MWE}$ and $\widehat{ES}_{MWC}$, defined in (2), (3), (4), (11) and (13), respectively. It illustrates various elements developed in previous sections and shows that with large samples, differences in survival between groups can be appropriately described by effect sizes. The experiment is outlined as follows. First, we chose two-parameter Weibull distributions as the underlying distributions for the failure and censoring times in groups 1, 2 and 3. We used $T_1 \sim \text{Weibull}(2.5, 0.4)$, $T_2 \sim \text{Weibull}(2.5, 0.5)$ and $T_3 \sim \text{Weibull}(2.5, 0.6)$ for the failure time in group 1, group 2 and group 3, respectively. Then the proportional hazards assumption holds between groups 1 and 2 and between groups 1 and 3, since $\lambda_1(t)/\lambda_2(t) \equiv 1.75$ and $\lambda_1(t)/\lambda_3(t) \equiv 2.76$, where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are the hazard functions in groups 1, 2 and 3, respectively. The censoring times in all three groups were chosen to follow a Weibull(10, 0.6) distribution. The survival functions of the failure and censoring times are shown in Fig. 3. (Note that the censoring survival function has a value smaller than $1 \times 10^{-4}$ at time $t = 0.75$, so roughly the study was terminated at $t = 0.75$.) Second, survival data were generated, with sample $i$ corresponding to group $i$ ($i = 1, 2, 3$). We increased the sample sizes for samples 1 and 2, denoted by $n_1$ and $n_2$, respectively, from 1000 to 20,000 at the same rate, while fixing the sample size for sample 3, denoted by $n_3$, at 2000.
Our goal was to compare the survival differences between group 1 and group 2 and between group 1 and group 3. We recorded the values of E S L , E S G , E S P , E S MWE and E S MWC when evaluating the survival difference between group 1 and group 2 and between group 1 and group 3. Here for E S MWE (τ 1 , τ 2 ) in (11) and E S MWC (τ ) in (13), we chose τ 1 = τ 2 = τ to be 0.6, since all Kaplan-Meier curves are stable before time t = 0.6. The simulation results are summarized in Table 2.
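The data-generating step above can be sketched in a few lines. The code below is illustrative (our own, assuming the Weibull(shape, scale) parameterization $S(t) = \exp(-(t/\text{scale})^{\text{shape}})$ and inverse-transform sampling); with the stated distributions, the empirical censoring rates come out close to the 11%, 26% and 42% reported for groups 1, 2 and 3.

```python
# Simulation sketch (illustrative): failure times T_i ~ Weibull(2.5, beta_i)
# with beta = 0.4, 0.5, 0.6, censoring times ~ Weibull(10, 0.6), under the
# parameterization S(t) = exp(-(t/scale)**shape).
import math
import random

random.seed(7)

def rweibull(shape, scale):
    """Inverse-transform sample: t = scale * (-log U)**(1/shape)."""
    u = 1.0 - random.random()      # in (0, 1], avoids log(0)
    return scale * (-math.log(u)) ** (1.0 / shape)

def censoring_rate(scale_failure, n=100_000):
    """Empirical fraction of subjects censored (C < T)."""
    censored = 0
    for _ in range(n):
        t = rweibull(2.5, scale_failure)
        c = rweibull(10.0, 0.6)
        if c < t:
            censored += 1
    return censored / n
```

Heavier censoring in group 3 than in group 1 reflects its longer failure times relative to the (roughly administrative) censoring distribution concentrated near t = 0.6.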
Computation through Eq. (14) shows that the values of $ES_L$, $ES_G$, $ES_P$ and $ES_{MW}$ between groups 1 and 2 are 0.44, 0.26, 0.26 and 0.27, respectively; the corresponding values between groups 1 and 3 are larger (in particular, $ES_{MW} = (2.76 - 1)/(2.76 + 1) \approx 0.47$). (Note that $ES_{MW} = \frac{r-1}{r+1}$, the relationship between $ES_{MW}$ and the hazard ratio $r$ stated in Sect. 4.4, can be easily verified between group 1 and group 2 and between group 1 and group 3.) Here in computing $ES_L$, we used the proportion of subjects in the sample to approximate the probability of assignment of subjects to the corresponding group. $ES_L$ between group 1 and group 3 could not be computed due to the change in sample proportions. The censoring rates in groups 1, 2 and 3 are 11%, 26% and 42%, respectively. Table 2 shows that, except for the $\widehat{ES}_L$ column between groups 1 and 3, the values of $\widehat{ES}_L$, $\widehat{ES}_G$, $\widehat{ES}_P$, $\widehat{ES}_{MWE}$ and $\widehat{ES}_{MWC}$ are, respectively, close to the values of $ES_L$, $ES_G$, $ES_P$, $ES_{MW}$ and $ES_{MW}$. This illustrates how the estimates of effect sizes developed in Sects. 2 and 3 approximate their corresponding effect sizes. Note that the estimates of effect sizes are not much influenced by the change in sample sizes, except for $\widehat{ES}_L$ evaluated between groups 1 and 3. Table 2 also shows that for each of $\widehat{ES}_L$, $\widehat{ES}_G$, $\widehat{ES}_P$, $\widehat{ES}_{MWE}$ and $\widehat{ES}_{MWC}$, the value obtained between groups 1 and 2 is smaller than that between groups 1 and 3. In particular, all of the values of $\widehat{ES}_{MWE}$, $\widehat{ES}_{MWC}$ and $\widehat{ES}_G$ indicate a medium effect between group 1 and group 2 but a large effect between group 1 and group 3. Therefore, all five types of effect size estimates support the fact that the survival difference between groups 1 and 2 is smaller than the difference between groups 1 and 3 (Fig. 3). These results are consistent with the results of the modified Cohen's d. The modified Cohen's d applies to cases of unequal variances and is defined as the ratio of the difference in means to the square root of the mean of the two variances [11].
Both Cohen's d and modified Cohen's d use the same conventional definition of small, medium and large values of d. The modified Cohen's d for the survival times in this example is −0.516 (medium) between Group 1 and Group 2, and is −0.917 (large) between Group 1 and Group 3.
To conclude this example, we make the following remark. One may wish to employ statistical tests to detect significant differences between groups. When doing this by simulations, we note that as n 1 and n 2 are large (say, above 15,000), p values of tests of logrank, Gehan-Wilcoxon and Prentice-Wilcoxon between group 1 and group 2 will be smaller than those between group 1 and group 3, indicating that the difference between group 1 and group 2 is more significant than the difference between group 1 and group 3. This is contrary to the conclusion from the proposed effect sizes and clearly against the truth shown in Fig. 3. Therefore, reporting only p values, which depend on sample sizes, may not be sufficient when differentiating differences between populations.

Example 2: Gastric Carcinoma Data
The proposed effect sizes can be applied no matter if hazards are proportional. Here we present one example with non-proportional hazards, in which our proposed effect sizes can be used to detect and quantify the difference between two groups, while the approach of the hazard ratio from the Cox regression fails to do so.
The Gastrointestinal Tumor Study Group studied survival of 90 patients with locally advanced, non-resectable gastric carcinoma by comparing two treatments: Fig. 4 Kaplan-Meier curves of failure times based on the gastric carcinoma data chemotherapy and radiation, and chemotherapy alone [39]. The aim was to determine if radiation was beneficial to the patients under chemotherapy. 45 and 45 patients were randomly assigned to the groups with and without radiation, respectively. The survival data are publicly available [39]. The Kaplan-Meier curves of the two groups are shown in Fig. 4, which indicates an overall negative impact of radiation on the survival of patients. Note that the observed censoring rates for both groups are low: 18% observed in each group. For convenience, the groups with and without radiation will be referenced as group 1 and group 2, respectively.
The Grambsch-Therneau test for examining the proportional hazard assumption gives a significant p value of 0.0033, suggesting that the hazard ratio would not be an appropriate effect size for analyzing this problem. In fact, the estimate of the hazard ratio (group 1 over group 2) from the Cox regression gives a value of 1.30 with a wide confidence interval (CI) (95% CI: 0.83 to 2.06). These estimates are not very informative when assessing if radiation improves survival.
In comparison, our proposed effect sizes can capture the adverse effect of radiation. Between the groups with and without radiation, E S L , E S G , E S P , E S MWE , and E S MWC , are, respectively, 0.22 (95% CI, -0.16 to 0.59), 0.27 (95% CI, 0.04 to 0.51), 0.26 (95% CI, 0.03 to 0.50), 0.26 (95% CI, 0.03 to 0.51) and 0.26 (95% CI, 0.03 to 0.51). (Hereafter, the CIs of the effect sizes are the bootstrap confidence intervals [14], based on 100,000 bootstrap samples.) Note that in computing E S MWE and E S MWC we used τ 1 = τ 2 = τ = τ * = min(t max 1 , t max 2 ) = 1472, since the Kaplan-Meier curves are relatively stable for the entire study period. These two estimates can be used for the example (with non-proportional hazards) since 2 S 1 (τ * ) × S 2 (τ * ) = 2 × 0.133 × 0.125 ≈ 0.03, which is small (Theorem 3). From the above estimates, several notes follow below. First, except for E S L , all other four effect sizes and their CIs show that radiation decreases the survival, which is consistent with the observation in Fig. 4. Second, estimates of E S MW and E S G suggest a medium effect size (Sect. 5). Third, because of the relatively low censoring rate (18% observed in each group), the estimates of effect sizes based on the test statistics and the Mann-Whitney parameter are close to each other except for E S L (Sect. 4.4).

Example 3: SEER Thyroid Cancer Data
The tumor, nodal involvement, metastasis (TNM) staging model provides the worldwide standard for cancer prognosis and treatment recommendations [1]. Due to the accumulation of time-to-event data with current advances in cancer research, periodic updates of the TNM are necessary for the improvement of the delivery of patient care. However, such updates often involve expert panels and can be cumbersome when one has to deal with big data. The Ensemble Algorithm for Clustering Cancer Data (EACCD) [8,25] is a machine learning procedure designed specifically to "automatically" update the TNM or its expanded models that involve additional factor(s). This algorithm computes survival differences between groups of patients and utilizes these differences to stratify patients. Below we demonstrate an application of EACCD, integrated with effect sizes, to update and improve the current thyroid cancer staging modeling. 51,291 patients of papillary and follicular thyroid cancer were selected from the SEER 18 databases [40]. Each patient was diagnosed sometime between 2004 and 2010 and was followed up until the year 2015. Cause-specific deaths and survival times (in months) were recorded. In addition, each patient provided information for the following four variables: tumor size (T), regional lymph nodes (N), status of distant metastasis (M) and age (A). All four variables are categorical with 7 levels for T: T0, T1, T2, T2, T3, T4a, T4b; 3 levels for N: N0, N1a, N1b; 2 levels for M: M0 M1; and 2 levels for A: A1 (age < 55), A2 (age ≥ 55). Clinically, it is useful to consider combinations of patients. A combination is defined as a subset of the data corresponding to one level of each factor and is conveniently written using levels of factors (e.g., T2N0M0A2 denotes a subset of patients with T=T2, N=N0, M=M0, A=A2). (Note that the notation of a combination implicitly represents the corresponding underlying population/group.) 
The dataset contains 39 combinations each including at least 25 patients. Refer to the study by Yang et al for specific data management [46].
The goal of applying EACCD is to partition these 39 combinations into g clinically meaningful clusters such that 1) patients from the same cluster have a clinically similar survival while patients from two distinct clusters have clinically different survivals; and 2) g is as small as possible while the set of g clusters can provide the maximum possible accuracy in survival prediction. After such g clusters are obtained, one can study the survival of patients in each cluster and predict the survival for new patients by using survivals from clusters. The merit of applying EACCD is that a large number of original combinations (as often it is) is aggregated into a small number of more manageable clusters that have approximately the same survival prediction accuracy as the original set of combinations [25].

EACCD consists of 3 main steps.
Step 1 defines the initial dissimilarity between survival functions of any two groups associated with two combinations. Step 2 computes learned dissimilarities by using initial dissimilarities.
Step 3 clusters the combinations by the learned dissimilarities and a linkage method. The clustering result is summarized into a dendrogram which will then be cut to obtain g clusters. Algorithm 1 is one version of the algorithm that utilizes the two-phase Partitioning Around Medoids algorithm (PAM) [26]. Critical to the application of EACCD is the definition of initial dissimilarities, which can largely affect the output of EACCD [43]. To find an appropriate measure of the initial dissimilarity between two groups associated with two combinations of the thyroid cancer data, we first note that the sample sizes of combinations vary a lot. For instance, T1N0M0A1 has 17,747 cases while T4bN1bM1A2 has only 40 cases. As shown in Example 1, an effect size should serve as a good choice of the initial dissimilarity. In addition, thyroid cancer has a very good prognosis and there are many combinations that have similar and very optimistic survivals before the time of administrative censoring (12 years). According to Sect. 4.3, the test statistic-based effect sizes would serve as better initial dissimilarities than the Mann-Whitney-based effect size.

Algorithm 1 Ensemble Algorithm for Clustering Cancer Data
When applying EACCD to the thyroid dataset (the number of combinations m = 39), the equal weights w 1 = · · · = w m = 1/m and the absolute values of 5 estimates ( E S L , E S G , E S P , E S MWE and E S MWC ) as initial dissimilarities were used in step 1 and the complete linkage [23] was employed in step 3. To compute the Mann-Whitney-based effect sizes, we first note that all Kaplan-Meier curves are stable before min 1≤k≤m {t max k }, where t max k is the largest time at which the Kaplan-Meier estimator is defined in Com k , and therefore, for simplicity, we set τ 1 = τ 2 = τ = τ * = min 1≤k≤m {t max k } for comparing any two combinations. Figures 5 and 6 show the results based on the test statistic and Mann-Whitney effect sizes, respectively. The left columns of both figures list dendrograms. Each red box represents cutting the dendrogram that corresponds to the optimal C-index [22,25]. Note that g, the number of clusters from cutting, can change when different effect sizes are used. The right columns show the thyroid cancer-specific survival using the Kaplan-Meier estimates. These survival curves allow us to visually evaluate the clusters. Figure 6b and d, based on the effect size E S MW , shows that several high curves have a similar survival, indicating the clusters are not well separated and thus form a poor partition of the patients. As explained in Sect. 4.3, E S MW tends to assign a big value of effect size to the difference between two groups (combinations) that share similar and optimistic survival experiences during the time of follow-up, because it compares not only the hazard difference over the period of study time but also the difference after the study. This tendency causes EACCD to partition combinations with similar and high survivals into different clusters, which actually do not have clinically meaningful differences in survival as their survival curves are very similar to each other. By contrast, Fig. 
5b, d, and f suggests all E S L , E S G , and E S P lead to a meaningful partition of data. The above example shows an application of the effect sizes in upgrading the staging system for thyroid cancer. There exist related works in the literature. The hazard ratiobased effect size and Mann-Whitney parameter-based effect size have been applied to expand the staging system for breast cancer and colorectal cancer [24,25].

Conclusion
This paper investigates effect sizes that are used to assess the difference in survival between two groups. It studies three test statistic-based effect sizes E S L , E S G and E S P and one Mann-Whitney-based effect size E S MW , all defined as weighted differences in hazards. The effect sizes E S L , E S G and E S P compare the survival experiences between two groups over the time period of investigation, while E S MW compares the survival experiences over the entire possible time period. Effect sizes E S L and E S G , estimated by E S L and E S G , can be applied to any data, while the use of E S P , estimated by E S P , assumes equal censoring distributions. Although E S MW has a clear interpretation, it may not be estimable or clinically meaningful. If the Kaplan-Meier curve that stops first drops to 0, E S MW can be estimated directly using the Kaplan-Meier estimates of the survival functions of the two groups. If the Kaplan-Meier curve that stops first does not drop to 0, E S MW can be estimated using both E S MWE and E S MWC under the assumption of proportional hazards between the two groups or under the condition that both curves are low for time close to when they stop.
Effect sizes proposed in this paper have direct applications to big survival data. An estimate of an effect size, computed using two large samples, quantifies the difference between two underlying populations. This is in contrast to checking the p value from a statistical test, which may not provide any insight about the size of the survival difference and could cause misunderstanding.
Effect sizes are particularly useful when repeated comparisons of survival differences are required in a study. For instance, when updating or expanding the TNM staging system for a cancer, assessing the survival difference is needed for many pairs of groups [1]. In this case, effect sizes can be used to differentiate the survival between two groups, so that two groups can be merged if the associated effect size is small. Survival tree modeling [6] represents another scenario that requires a sequential comparison of survival for various pairs of groups. The key element in survival trees is their splitting rules, which can be determined by measuring the dissimilarities between possible nodes and finding the separation that leads to most dissimilar nodes. The largest significant value from test statistics (e.g., logrank, Gehan-Wilcoxon) is often used to suggest the "best" separation, based on the idea that the more extreme the statistic, the more dissimilar the two nodes [9,10]. As seen in previous sections of this article, effect sizes seem to be better choices for measuring dissimilarities between nodes and their use is expected to improve the performance of survival trees.
Hazard ratio, the traditionally used effect size, requires the assumption of proportional hazards. With non-proportional hazards, hazard ratios can fail to detect and quantify the difference between two groups. However, the proposed effect sizes in this paper can be applied no matter if hazards are proportional. This is important in the settings of clinical trials. At the design and planning stages of a study, there is usually no information about whether or not two groups have proportional hazards, so that the assumption of proportional hazards should not be made. The proposed effect sizes, defined as the weighted differences in hazards, would be more reasonable to be used along with the survival time end point for the study than the hazard ratio.
We have defined the effect sizes and studied their estimates for comparing the survival difference between groups. The use of these estimates is straightforward for problems where variable adjustment is not needed. For situations where adjustment needs to be made with important variables, we propose to test the existing adjustment methods, such as matching, stratification and inverse probability weighting [45]. This constitutes an interesting future study. We have not discussed potential analytical formulas for the variances of the proposed estimators. With availability of these formulas, one can investigate not only interval estimation of effect sizes but also the minimum sample sizes required for using the estimators. Finding theoretical formulas for the variances is of interest and deserves further research.

Proof of Theorem 1
Before we prove Theorem 1, we need three propositions.
Also suppose that for all δ > 0, there exists k δ with Proposition 2 (Daniels' "in probability linear bound") [4,13] Let K n be the empirical distribution function based on a random sample of size n from the continuous (cumulative) distribution function K . Then, for any δ ∈ (0, 1), Proposition 3 (Lenglart-Rebolledo inequality) [4,36] Let N be a counting process, and M = N − A the corresponding local square integrable martingale. Suppose H is a predictable and locally bounded process. Then for any stopping time T (≤ ∞) and any , η > 0, where M, M is the compensator for M 2 .
With these propositions, we can prove Theorem 1. Rewrite where the last equation is an application of the Lebesgue-Stieltjes integration. Letting we have n According to Doob-Meyer decomposition theorem [31], N i (t) can be decomposed into a martingale M i (t) and a compensator A i (t) = t 0 λ i (s)Y i (s)ds, i = 1, 2. Therefore, The proof of the theorem consists of two steps.
Step 1 We show n P → 0 as n → ∞. We begin with the integral and will use Proposition 1 to show First, by the fact that n i.e., Second, we note that (n i − Q i (t))/n i is the value, at time t− (time just before t), of the empirical distribution function from the cumulative distribution function of the failure time in group i. According to Proposition 2, for any δ ∈ (0, 1), we have, Third, by Bonferroni inequality, we see and correspondingly lim inf Hence, from where k δ (t) = 1 P 2 δ > 0. By Eqs. (18), (20), (22) and Proposition 1, we see (19) is established.
We are now in a position to show n P → 0. In fact, from Eq. (19) and the fact that |w n (t)| ≤ 1 and thus is the compensator of the counting process N i (t) with A i (0) = 0 a.s. and A i (t) < ∞ a.s. for any t, M i = N i − A i is a local square integrable martingale [16]. In addition, since H i (t) is a left-continuous process and w n (t)H i (t) = 1 n 1 + 1 is a predictable and bounded process [16]. Moreover, [16]. Using Proposition 3, we have, for any , η > 0, Therefore, Step 2 We show n P → ∞ 0 Again the proof is done by taking advantage of Proposition 1. Rewrite n as By the law of large numbers, we have Y 1 (t) n p → P(X ≥ t) = π(t), and hence, due to the assumption w n (t) P → ω(t), the integrand in n has the following probability limit: To obtain the lower bound of lim inf n→∞ , we first observe (Y 1 (t)/n 1 )(Y 2 (t)/n 2 ) (Y 1 (t) + Y 2 (t))/n w n (t)(λ 1 (t) − λ 2 (t)) ≤ n n 2 Y 1 (t) n 1 λ 1 (t) + n n 1 Y 2 (t) n 2 λ 2 (t).