Misclassification of current status data

We describe a simple method for nonparametric estimation of a distribution function based on current status data where observations of current status information are subject to misclassification. Nonparametric maximum likelihood techniques lead to use of a straightforward set of adjustments to the familiar pool-adjacent-violators estimator used when misclassification is assumed absent. The methods consider alternative misclassification models and are extended to regression models for the underlying survival time. The ideas are motivated by and applied to an example on human papilloma virus (HPV) infection status of a sample of women examined in San Francisco.


Introduction
Current status data provide information on the survival status of individuals at various times rather than standard observation, possibly right-censored, of failure times. Considerable attention has been given to estimation of a survival function based on such data, and to estimation of regression coefficients in a variety of standard models. The earliest work was motivated by applications in demography (Diamond et al. 1986) and epidemiology (Becker 1989), followed by carcinogenicity studies, partner studies of Human Immunodeficiency Virus (HIV) transmission (Shiboski and Jewell 1992), age-incidence estimation, and assessment of environmental exposures (Keiding 1991). Nonparametric estimation in the single-sample setting is based on the well-known pool-adjacent-violators algorithm of Ayer et al. (1955). Regression analyses have largely employed techniques from generalized linear models for the current status outcome and variants of generalized additive models (Shiboski 1998). A brief review and description of some current open problems can be found in Jewell and van der Laan (2004).
In many of these applications, ascertainment of an individual's current status is based on a screening test which may not have perfect sensitivity and specificity. For example, tests for the infection status of a viral disease like HIV or HPV are designed to detect antibodies and may be subject to error, particularly when a test is performed soon after infection. Detection of the existence of uterine fibroids through ultrasound (Young et al. 2008) is known to be subject to error. When current status is measured through a survey instrument, as in studies of the age at onset of menopause (Grummer-Straun 1993; Jewell et al. 2003), there is potential for misclassification, particularly close to the (unobserved) event time, menopause in this specific example.
We extend the nonparametric maximum likelihood estimator of the distribution function underlying current status data when there is no misclassification to allow for time-independent misclassification of both apparent "survivors" and "failures" with known misclassification rates. Calculation of the proposed estimator uses a simple modification of the pool-adjacent-violators algorithm. Asymptotic properties therefore follow straightforwardly. We consider the implication of misclassification rates that vary over time, in particular when misclassification only occurs in a known time window surrounding the underlying failure event. We also consider regression models for current status data subject to misclassification, using the ideas for binary generalized linear models with outcome subject to misclassification (Neuhaus 1999).

Nonparametric estimation of a single distribution function
We assume the standard data structure for current status data with the following notation. Let T be the survival time random variable of interest with distribution function F, with the monitoring time denoted by the random variable C. As usual, we assume that C is independent of T; in some examples, C is non-random. In either case, we focus directly on the conditional likelihood, given C. For convenience we describe the random monitoring time scenario, where current status observation refers to a sampling scheme where n i.i.d. observations are collected on the random variable (Y, C) where Y = I(T ≤ C).
Motivated by the examples discussed in the introduction, we now consider the possibility that the random variable Y is observed with error. We focus primarily on the following constant misclassification model, although we discuss alternative error models in Sect. 2.3. Assume that instead of observing Y we observe the random variable Δ, where α = P(Δ = 1 | Y = 1) and β = P(Δ = 0 | Y = 0) are assumed known and constant over time. Then, with c_i denoting the observed value of C_i,

P(Δ_i = 1 | c_i) = P(Δ_i = 1 | y_i = 1, c_i)P(y_i = 1 | c_i) + P(Δ_i = 1 | y_i = 0, c_i)P(y_i = 0 | c_i) = (α − 1 + β)F(c_i) + 1 − β,

and

P(Δ_i = 0 | c_i) = P(Δ_i = 0 | y_i = 0, c_i)P(y_i = 0 | c_i) + P(Δ_i = 0 | y_i = 1, c_i)P(y_i = 1 | c_i) = β − (α − 1 + β)F(c_i).
For ease of notation let γ = α + β − 1 > 0. Then the (conditional) likelihood function allowing for misclassification in the response variable can be written as

∏_{i=1}^{n} [γF(c_i) + 1 − β]^{δ_i} [β − γF(c_i)]^{1 − δ_i},    (1)

with corresponding log-likelihood

∑_{i=1}^{n} δ_i log[γF(c_i) + 1 − β] + (1 − δ_i) log[β − γF(c_i)].

Writing G(c_i) ≡ γF(c_i) + (1 − β), the nonparametric maximum likelihood estimate of the distribution function when the current status outcomes are subject to misclassification can be found by obtaining a vector z = (z_1 = G(c_1), …, z_n = G(c_n)) ∈ ℝ^n maximizing

∑_{i=1}^{n} δ_i log z_i + (1 − δ_i) log(1 − z_i)    (2)

under the constraint

1 − β ≤ z_1 ≤ z_2 ≤ ⋯ ≤ z_n ≤ α,    (3)

where the monitoring times are assumed ordered so that c_1 ≤ c_2 ≤ ⋯ ≤ c_n. Note that G is itself a distribution function.
Claim The identity z̃_m = min{max{ẑ_m, 1 − β}, α}, m = 1, …, n, defines the unique vector z̃ = (z̃_1, z̃_2, …, z̃_n) ∈ ℝ^n maximizing (2) under constraint (3), where ẑ = (ẑ_1, …, ẑ_n) is the unconstrained nonparametric maximum likelihood estimate (NPMLE) of the distribution function G based on maximizing (2) with no additional constraint (3).
Note that the vector (ẑ_m : m = 1, …, n) can be computed using the standard pool-adjacent-violators algorithm, originally described by Ayer et al. (1955) and characterised by Barlow et al. (1972) and Groeneboom and Wellner (1992) in terms of convex minorants. The vector (z̃_m) modifies any value of ẑ_m less than 1 − β to equal 1 − β, and similarly modifies any value of ẑ_m greater than α to equal α. The NPMLE of F at a monitoring time c_i then follows from the relationship F̂(c_i) = [G̃(c_i) − 1 + β]/γ, where G̃(c_i) = z̃_i.
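The two steps above can be sketched in a few lines of Python (our illustration; function names are ours): run pool-adjacent-violators on the misclassified indicators ordered by monitoring time, clamp the fit to [1 − β, α], and invert G = γF + 1 − β.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: nondecreasing fit to the 0/1 sequence y,
    assumed sorted by monitoring time; returns the unconstrained NPMLE of G."""
    y = np.asarray(y, dtype=float)
    blocks = []  # each block holds [level, weight]
    for v in y:
        blocks.append([v, 1.0])
        # merge backwards while adjacent blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v1, w1 = blocks.pop()
            v0, w0 = blocks.pop()
            blocks.append([(v0 * w0 + v1 * w1) / (w0 + w1), w0 + w1])
    return np.concatenate([np.full(int(w), v) for v, w in blocks])

def adjusted_npmle(delta, alpha, beta):
    """Misclassification-adjusted NPMLE of F at the ordered monitoring times."""
    z_hat = pava(delta)                        # unconstrained NPMLE of G
    z_tilde = np.clip(z_hat, 1 - beta, alpha)  # clamp to [1 - beta, alpha]
    gamma = alpha + beta - 1
    return (z_tilde - (1 - beta)) / gamma      # invert G = gamma*F + 1 - beta
```

For example, with δ = (0, 1, 0, 1, 1) and α = β = 0.8, the unconstrained fit (0, 0.5, 0.5, 1, 1) is clamped to (0.2, 0.5, 0.5, 0.8, 0.8) and mapped back to F̂ = (0, 0.5, 0.5, 1, 1).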
Proof of Claim First note that, if δ_i = 0 for i = 1, 2, …, k, then maximizing (2) requires the second term to be as large as possible, in which case we set z_1 = z_2 = ⋯ = z_k = 1 − β without affecting the maximization problem over the remaining z_{k+1}, …, z_n. Similarly, if δ_i = 1 for j ≤ i ≤ n, then to maximize (2) we make the first term as large as possible, setting z_j = z_{j+1} = ⋯ = z_n = α.
Suppose there exists at least one δ i = 1 followed by a δ j = 0, for some j > i (otherwise we are done).
Let k_0 be the smallest index i such that δ_i = 1, and let k_1 be the smallest index k ≥ k_0 such that the unconstrained NPMLE satisfies ẑ_k ≥ 1 − β. Analogously, let m_0 be the largest index k ≤ m_1 such that ẑ_k ≤ α, with m_1 being the largest index i such that δ_i = 0.
Thus, k_0 and m_1 represent the index of the first δ_i = 1 and the last δ_j = 0, respectively, where j > i. Also, k_1 is the smallest index for which the unconstrained NPMLE does not fall below 1 − β, and m_0 is the largest index for which the unconstrained NPMLE does not go above α. Figure 1 shows the positioning of such indices as they would appear in terms of a hypothetical unconstrained NPMLE of a distribution function. The dashed lines are positioned at 1 − β and α, between which the constrained NPMLE must lie. Using these definitions, the claim can be written as:

A. For all indices m < k_1, z̃_m = 1 − β.

B. For all indices m > m_0, z̃_m = α.

C. For all indices k_1 ≤ m ≤ m_0, z̃_m = ẑ_m = max_{i ≤ m} min_{k ≥ m} ∑_{i ≤ j ≤ k} δ_j/(k − i + 1), the unconstrained NPMLE.

We prove the claim by establishing each statement separately. First, we show that for all indices m < k_1, z̃_m = 1 − β maximizes the relevant terms in the likelihood (2), subject to the constraint (3), without affecting the optimization function, or constraint, based on z_i for other indices. Consider indices i with k_0 ≤ i < k_1. Suppose the values of z_i over this range of indices take values that are increasing and, necessarily, ≥ 1 − β. Consider the largest of these indices (just to the "left" of k_1) where the proposed maximizer values of z_i assume the value 1 − β + ϵ where ϵ > 0. It does not matter here whether z_i assumes this value at one or over a set, S, of consecutive indices. Assume that amongst the set of indices S there are p indices i where δ_i = 1 and q indices where δ_i = 0. The contribution to the likelihood (2) over this set of indices is therefore p log(1 − β + ϵ) + q log(β − ϵ) ≡ h(ϵ), say. The derivative of this function is h′(ϵ) = p/(1 − β + ϵ) − q/(β − ϵ). Now, by the definition of k_1 relative to the definition of the unconstrained NPMLE, it follows that p/(p + q) < 1 − β, which in turn implies that q/p > β/(1 − β). Since ϵ > 0, β/(1 − β) > (β − ϵ)/(1 − β + ϵ), and it then follows that h′(ϵ) < 0 so that h is decreasing in ϵ. Thus, without changing the optimization problem in terms of the other indices and constraints, we can increase the likelihood by lowering the value of the proposed z_i to the next lower value (to the right) where z_j = 1 − β + λ with 0 < λ < ϵ. However, we can now repeat the same argument in terms of λ, and thus we keep lowering the relevant z_i's until they all equal 1 − β. This proves (A). An identical argument also establishes (B). The statement (C) follows since ẑ_m = max_{i ≤ m} min_{k ≥ m} ∑_{i ≤ j ≤ k} δ_j/(k − i + 1) is already the unconstrained NPMLE and meets the constraints (3) by definition of k_1 and m_0.
The claim is thus proven.

Pointwise confidence intervals for the NPMLE
There is by now a growing literature on the non-standard asymptotic properties of the standard NPMLE of F for current status data with no misclassification. There is a slower rate of convergence (n^{1/3} as opposed to the familiar n^{1/2} rate), and the limit distribution is not Gaussian (Groeneboom and Wellner 1992). We conjecture that these results extend straightforwardly to the NPMLE for misclassified current status data. Thus it is not appropriate to focus on the (asymptotic) variance of the NPMLE based on any form of current status data as a step towards confidence interval construction. For pointwise confidence intervals for F, various approaches have been developed for standard current status data (Banerjee and Wellner 2005). Suggested techniques include the likelihood-ratio method (Banerjee and Wellner 2001), an approach that can presumably also be adapted to allow for misclassification.
In general, the standard bootstrap yields inconsistent estimates of pointwise confidence intervals whether data is sampled with replacement from the original data or generated from the NPMLE estimator (Sen et al. 2010). As a modification, a smoothed version of the bootstrap is appropriate, as is the m out of n bootstrap (Politis et al. 1999). Practically, this procedure necessarily involves choice of the 'block' size m. Asymptotically, m must be chosen so that m → ∞ and m/n → 0 as n → ∞ although these requirements provide little guidance for a finite sample size. Banerjee and Wellner (2005) suggest an intricate procedure for choice of m, based itself on bootstrapping. The method can be adapted to provide symmetric confidence intervals as these often perform better in finite samples. Banerjee and Wellner (2005) provide further implementation details. For current status data with misclassification, illustrative calculations of symmetric confidence intervals using the m out of n bootstrap are provided in Sect. 2.2.
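As an illustration of the m out of n bootstrap for a symmetric pointwise interval, the following sketch treats the estimator as a black box evaluated at a single time point t0. The m^{1/3} and n^{1/3} scalings reflect the conjectured cube-root convergence rate; `estimator` is a placeholder for, e.g., the misclassification-adjusted NPMLE at t0. This is our construction, following the general recipe rather than the specific data-driven procedure of Banerjee and Wellner (2005).

```python
import numpy as np

def m_out_of_n_ci(c, delta, t0, estimator, m, B=500, level=0.95, seed=0):
    """Symmetric m-out-of-n bootstrap confidence interval for F(t0).

    estimator(c, delta, t0) -> point estimate of F(t0); c is assumed sorted."""
    rng = np.random.default_rng(seed)
    n = len(c)
    theta_n = estimator(c, delta, t0)
    devs = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=m)    # resample m of n observations
        order = np.argsort(c[idx])          # keep monitoring times sorted
        theta_m = estimator(c[idx][order], delta[idx][order], t0)
        devs[b] = m ** (1 / 3) * abs(theta_m - theta_n)  # scaled deviation
    half = np.quantile(devs, level) / n ** (1 / 3)       # symmetric half-width
    return theta_n - half, theta_n + half
```

A data-dependent choice of m, as suggested by Banerjee and Wellner (2005), would replace the fixed m used here.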

Illustration and data example
First, Fig. 2a illustrates the unconstrained NPMLE and the NPMLE adjusted for misclassification for a hypothetical data set with sample size n = 500 generated from an exponential distribution, F, with mean 2. The monitoring times were selected at random from a uniform distribution on a set of discrete time values ranging from 0 to 3 at equal increments of 0.1. The classification rates used in generating the data were α = β = 0.8, and these values were assumed known in calculating the adjusted NPMLE. Note that, with α = β, the two estimators cross at F = 0.5, the estimated median time to occurrence; for time points below this value, the adjusted estimate of F is shifted downwards from the naive estimator as misclassifications are accounted for, and similarly shifted upwards at values of time above the estimated median.
Current status data on human papilloma virus (HPV) infection among women motivate and illustrate this work. The study consisted of 827 women aged 13.5-24.2 years examined in San Francisco (Moscicki et al. 1998). The data contained a binary indicator of whether a woman has HPV infection at the time of the survey (Y) and her age at screening (C).
Covariates included indicators of current smoking status and past infection with any other sexually transmitted disease (STD). For more information about the dataset see Neuhaus (1999), where it was assumed that the HPV testing approach enjoyed (correct) classification rates of α = 0.8 and β = 0.9. We note that more advanced screening instruments for HPV are now available.
In this example we first need to consider the definition of the underlying failure time since HPV infection can sometimes go into remission in the sense that negative tests can plausibly follow an earlier true positive test. Here we define T to be age at first HPV infection as distinct from the cross-sectional prevalence interpretation used by Neuhaus (1999).
In this case, we allow for additional misclassification of apparently negative screens, as such individuals may previously have been infected. We assume that such misclassification applies to 10% of negative screening results. This additional misclassification reduces the value of α to 0.73 [α = P(Δ = 1 | Y = 1) = P(Δ = 1 | Z = 1, Y = 1)P(Z = 1 | Y = 1) + P(Δ = 1 | Z = 0, Y = 1)P(Z = 0 | Y = 1), where Z = 1 if the individual has antibodies]. Based on the HPV data, Fig. 2b displays both the unconstrained NPMLE estimate of age at onset of HPV, and the NPMLE adjusted for misclassification with the assumed values α = 0.73 and β = 0.9, which allows for the additional misclassification discussed above. With these unequal classification probabilities, the two curves cross at F = 0.270, with the adjusted NPMLE shifted appropriately higher for higher ages. We do not see the shift downwards for lower ages since the first jump of the unconstrained NPMLE is to a value higher than 0.270.
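The reduction from α = 0.8 to α = 0.73 can be checked arithmetically. The decomposition below is our reading of the bracketed formula: laboratory sensitivity 0.8 applies when antibodies are present (90% of ever-infected women), while an antibody-negative woman tests positive only at the false positive rate 1 − β = 0.1; the variable names are ours.

```python
# Adjusted sensitivity for age-at-first-infection, allowing for previously
# infected women whose screens are antibody-negative (assumed decomposition).
lab_sens = 0.8       # P(test + | antibodies present, ever infected)
p_antibodies = 0.9   # P(antibodies present | ever infected)
false_pos = 0.1      # P(test + | antibodies absent) = 1 - beta
alpha_adj = lab_sens * p_antibodies + false_pos * (1 - p_antibodies)
print(round(alpha_adj, 2))  # 0.73
```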
95% symmetric confidence intervals were calculated for the adjusted (α = 0.73, β = 0.9) NPMLE using the m out of n bootstrap noted in Sect. 2.1. Table 1 provides the results of such calculations at three monitoring times for various choices of m ranging from 9 to 423.

Misclassification that varies over time
We now consider an extension of the simple constant (i.e. time independent) misclassification model to allow for the misclassification rates to vary over time. In particular, we consider the situation where one or both misclassifications occur only when the monitoring time is close to the time of the true event occurrence. This is natural for screening tests where accuracy may be essentially perfect far from the event time on either side but where misclassification is likely when screening is administered just before or after the event of interest. For example, with current status assessment of menopause, misclassification is unlikely for a woman of age 30 or 65, but may be plausible at age 50.
In diagnosing HPV infection, the probability of a false negative possibly decreases with time since infection.
We examine the simple extension where misclassification occurs only in a time window surrounding the true failure event T given by [T − A, T + A]. Within this interval we assume that the classification rates α, β > 0.5 are known, that perfect classification occurs at screening times outside the window, and that the value A is also known. Under these assumptions, the probability of an apparent failure at monitoring time c_i is G*(c_i) ≡ (1 − α)F(c_i − A) + γF(c_i) + (1 − β)F(c_i + A), and we obtain the following log-likelihood:

∑_{i=1}^{n} δ_i log G*(c_i) + (1 − δ_i) log[1 − G*(c_i)].    (4)

Note that when A = 0 and A = ∞, (4) reduces to the conditional log-likelihood of the unconstrained NPMLE and the conditional log-likelihood with constant misclassification rates, respectively. The more complex conditional log-likelihood is still of the form given in (2). However, finding the NPMLE of G* is complicated here by the fact that the constraint on G* (as c → 0) depends on the unknown value F(A). In addition, even if a reasonable estimator of G* is determined, it is not generally possible to solve for F in terms of G*. This identifiability issue is most easily seen when there is but a single monitoring time, C; in this situation, only G*(C) is identifiable from the data and differing values of F(C) (and F(C − A) and F(C + A)) are compatible with any given value of G*(C).
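Under the window model just described, the probability of an apparent failure at monitoring time c can be assembled by conditioning on where T falls relative to c. The sketch below is our own derivation under those assumptions, not a formula quoted from elsewhere.

```python
import numpy as np

def g_star(F, c, alpha, beta, A):
    """P(Delta = 1 | C = c) when misclassification occurs only for |C - T| <= A.

    Conditioning on the position of T relative to c:
      T <  c - A       -> screening is past the window: Delta = 1 surely;
      c - A <= T <= c  -> true failure, detected with probability alpha;
      c <  T <= c + A  -> true survivor, false positive probability 1 - beta;
      T >  c + A       -> screening precedes the window: Delta = 0 surely.
    Collecting terms gives the expression returned below."""
    gamma = alpha + beta - 1
    return (1 - alpha) * F(c - A) + gamma * F(c) + (1 - beta) * F(c + A)
```

With A = 0 this collapses to F(c) (no misclassification), and as A → ∞ it collapses to γF(c) + 1 − β, the constant misclassification model.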
However, this does not address identifiability when the observed monitoring times cover a much broader range. In the latter situation, it is possible to make bias modifications to either the unconstrained or adjusted NPMLE to address an incorrect misclassification assumption. This allows the proposed and unconstrained estimators to accommodate a different window of misclassification than assumed by either estimator; the approach is formally introduced, discussed and evaluated via simulations in the next subsection.

Time-varying misclassification: simulations
We carried out a set of simulations to examine the implications of misclassification rates that vary over time. Data sets of unobserved event times, of sample size 500, were generated from an Exponential distribution, F, with mean 2. Current status observations were then created based on monitoring times selected at random from a Uniform distribution on a set of discrete time values ranging from 0 to 3 at equal increments of 0.2. Finally, the current status data were (mis)classified with classification probabilities of α = β = 0.8 if and only if |C i − T i | ≤ A in order to obtain the data set used in estimation. Outside this window the current status responses were observed without error. A variety of values of A were examined including A = 0 (no misclassification) and A = ∞ (constant misclassification).
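The data-generating scheme just described can be sketched as a short simulation (our illustration; the function name and defaults are ours):

```python
import numpy as np

def simulate_window_data(n=500, A=0.5, alpha=0.8, beta=0.8, seed=0):
    """One simulated current status data set under window misclassification.

    Event times are Exponential with mean 2; monitoring times are drawn
    uniformly from the grid 0, 0.2, ..., 3; responses are misclassified
    (rates alpha, beta) only when |C - T| <= A, as in the design above."""
    rng = np.random.default_rng(seed)
    T = rng.exponential(scale=2.0, size=n)            # unobserved event times
    C = rng.choice(np.round(np.arange(0, 3.2, 0.2), 1), size=n)
    Y = (T <= C).astype(int)                          # true current status
    delta = Y.copy()
    window = np.abs(C - T) <= A                       # misclassification zone
    u = rng.random(n)
    delta[window & (Y == 1) & (u > alpha)] = 0        # false negatives
    delta[window & (Y == 0) & (u > beta)] = 1         # false positives
    order = np.argsort(C)
    return C[order], delta[order], T[order]
```

Setting A = 0 reproduces error-free current status data, while a very large A subjects every observation to the constant misclassification model.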
For each data set, estimates of F were obtained according to both the unconstrained NPMLE and the proposed estimator of Sect. 2 that assumes constant misclassification rates at all times (i.e. assumes A = ∞). For non-extreme values of A, these two estimators were compared to determine which approach would be most accurate if it is suspected that the data are misclassified within a specific window and not misclassified otherwise. Each simulation consisted of 1000 data sets. Table 2 shows the results for both estimators of F at a selection of monitoring times, chosen systematically to depict the overall spread, for windows of length A = 0 and A = ∞. The results are as expected: the NPMLE assuming no misclassification performs best for a window of A = 0 (where no individuals are subject to misclassification) and the proposed NPMLE, adjusted for constant misclassification, performs best for a window of A = ∞ (where all individuals are subject to misclassification, with approximately 20% misclassified). Table 3 provides similar results where the window length varies, allowing approximately 60% and 82% of individuals to be subject to misclassification, the actual average percent misclassified also being indicated in the table. The results of Table 3 are perhaps not as expected: the adjusted NPMLE only outperforms the unconstrained NPMLE when a very high proportion of individuals are subject to misclassification. Even when 82% are subject to misclassification, evidence in favor of the adjusted NPMLE is not overwhelming.
In practice, an investigator does not know the underlying F and so cannot immediately assess which approximate NPMLE to use: the one that assumes no misclassification or the one that assumes a constant rate of misclassification over time.
In this situation, it is possible however to carry out a simulation using either estimator as the assumed 'true' F to examine performance. We examine this further in the next simulation with an additional wrinkle to the misclassification model in the non-extreme simulations.
If there is misclassification due to laboratory error in the (current status) screening instrument, all individuals will be subject to this error. However, even with constant laboratory misclassification, there may also be increased (and potentially asymmetric) misclassification rates close to the true failure event. Table 4 presents results of simulations from the HPV data where the true underlying distribution is assumed to be the unconstrained NPMLE as obtained through the standard pool-adjacent-violators algorithm. A constant laboratory error is assumed, giving classification rates of α = 0.8 and β = 0.9 outside the window and α = 0.73 and β = 0.9 within the window, indicating an additional deterioration in sensitivity close to the underlying failure time. In computing the constant misclassification adjusted NPMLE the values α = 0.73 and β = 0.9 were assumed.
In the simulations for the HPV data it must be noted that, unlike the simulations in Tables 2 and 3, when A = 0 there is still misclassification present, at a constant rate of α = 0.8, β = 0.9. This explains the lack of accuracy for A = 0 of the unconstrained NPMLE, which assumes no misclassification (and similarly for the constant misclassification adjusted NPMLE, which uses the incorrect misclassification probabilities). When A = ∞ the results are as expected, with the adjusted NPMLE more favorable, as in this instance there is constant misclassification at rates α = 0.73, β = 0.9. Under the intermediate situations, with complex window misclassifications and non-zero and finite values for A, the simulations suggest that there is a slight preference for the adjusted NPMLE in terms of bias, although there is a small price to be paid in additional variability. Mean squared error gives the nod here to the unconstrained NPMLE, at least with these two possibilities for the window parameter A.
In either case, the simulations suggest a way to remove the bias for either estimator when A is finite and non-zero. The bias-adjusted algorithm is as follows: (i) compute a suitable simulation 'guess' for the F to be used in the simulations; (ii) simulate data assuming this 'guess' is the truth, with the assumed value for A and the relevant misclassification probabilities within and without the window defined by A; (iii) estimate the bias at all values of C of interest by comparing the simulation average with either of the original estimators; (iv) remove this estimated bias from the original estimator. Either the unconstrained NPMLE or the constant misclassification adjusted NPMLE could be used for the 'guess', although we prefer to hedge our bets by using the average of these two straightforward estimators since the simulations seem to suggest that the bias for the two estimators is sometimes in opposite directions, particularly in the tails where the biases tend to be most severe. Note that this algorithm can be used for more complex misclassification models that might be anticipated.
To formalize the above steps of the bias adjustment approach, note that the bias in the unconstrained NPMLE at t_0 is bias_0(t_0) = E[F̂_0(t_0; F)] − F(t_0), where F is the assumed true data generating distribution and F̂_0 is the unconstrained NPMLE. We estimate the bias by substituting a guess F_g for F in each of the terms of bias_0(t_0) and approximating the expectation through simulations, yielding the estimate b̂ias_0(t_0) = Ê[F̂_0(t_0; F_g)] − F_g(t_0). The guess F_g could be the unconstrained estimate, F_g = F̂_0, the estimate under constant misclassification, F_g = F̂_∞, or the average of the two estimates, F_g = (F̂_0 + F̂_∞)/2.
Finally, we produce the bias-adjusted estimate F̂_0^ba(t_0) = F̂_0(t_0) − b̂ias_0(t_0). Similarly, for the constant misclassification adjusted NPMLE, b̂ias_∞(t_0) = Ê[F̂_∞(t_0; F_g)] − F_g(t_0), where F_g is chosen as before; this estimated bias can then be used to 'correct' F̂_∞ in the same way. Tables 3 and 4 provide the simulated performance of these bias adjusted versions of the original estimators for the same simulations considered before. In constructing the bias adjusted estimators, a sample size of 500 was used in step (ii) of the algorithm above and 1,000 simulations of step (ii) were carried out. It is clear from the results reported in Tables 3 and 4 that the bias adjusted estimators have significantly improved performance in terms of bias with only modest increases in variability. The improvement is more noticeable in Table 3 as the original bias is much greater. Note that bias adjustments can also be calculated when A = 0 but are not presented in the table.
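The four steps of the bias-adjustment algorithm can be sketched generically. Here `estimate_at` and `simulate` are placeholders (our names) for the chosen original estimator and for data generation under the guess F_g with the assumed window and misclassification rates; this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def bias_adjusted(estimate_at, simulate, F_g, c, delta, t_grid, n_sim=1000, seed=0):
    """Bias-adjusted estimator, steps (i)-(iv) of the algorithm.

    estimate_at(c, delta, t_grid) -> original estimates of F on t_grid;
    simulate(rng) -> one data set (c, delta) generated treating the
    guess F_g as the true distribution (step (i) supplies F_g)."""
    rng = np.random.default_rng(seed)
    # (ii) simulate data sets with F_g playing the role of the truth
    sims = np.array([estimate_at(*simulate(rng), t_grid) for _ in range(n_sim)])
    # (iii) estimated bias: simulation average minus F_g itself
    bias = sims.mean(axis=0) - F_g(t_grid)
    # (iv) subtract the estimated bias from the original estimate
    return estimate_at(c, delta, t_grid) - bias
```

Using F_g = (F̂_0 + F̂_∞)/2 corresponds to supplying the average of the two straightforward estimators as the simulation guess.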

Regression models
We briefly consider the extension of the above ideas to the regression context where interest focuses on the effects of a (potentially multidimensional) covariate X. Much of the literature on current status data has exploited the correspondence between standard regression models for the underlying failure time and generalized linear models for the observed current status outcome in both the parametric and semiparametric setting. These ideas are reviewed in Jewell and van der Laan (2004) and extended to more complex failure time data in Jewell (2007).
To adapt these techniques to accommodate misclassification we use the ideas of binary generalized linear models with outcomes subject to misclassification (Neuhaus 1999).
For example, assuming a Weibull regression model for T, the generalized linear model for Y in X and C involves g, the complementary log-log link function. We fit regression models to the HPV data (a) assuming no errors in the response variable (therefore using g directly), and (b) adjusting for errors with constant classification rates α = 0.73 and β = 0.9 (using the correspondingly adjusted link g*).
These assumed classification rates allow both for laboratory error and the possibility that some negative tests fail to detect prior HPV infection as discussed in Sect. 2.2. Note that the parameter estimates in both models have proportional hazards interpretations on age at first infection with HPV, according to the Weibull regression model assumption for T, as distinct from the simple cross-sectional interpretations discussed in Neuhaus (1999). The results of both models are presented in Table 5, along with the observed ratio of parameter estimates. The generalized linear model induced by Weibull regression indicates that age at screening must be included in the model additively on the log scale. The standard errors were obtained from the observed information matrix and were calculated using PROC GENMOD in SAS version 9.1.
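In model (b), the adjusted link g* composes the complementary log-log inverse link with the classification rates, so that fitted probabilities for Δ range over [1 − β, α] rather than [0, 1]. A sketch of the inverse link, in our notation and following Neuhaus's construction:

```python
import numpy as np

def inv_cloglog(eta):
    """Inverse complementary log-log link: P(Y = 1 | eta) under the Weibull model."""
    return 1.0 - np.exp(-np.exp(eta))

def inv_link_adjusted(eta, alpha=0.73, beta=0.9):
    """Inverse of the misclassification-adjusted link g*: P(Delta = 1 | eta).

    Mixes the false positive rate 1 - beta with detection at rate
    gamma = alpha + beta - 1, so fitted values lie in [1 - beta, alpha]."""
    return (1.0 - beta) + (alpha + beta - 1.0) * inv_cloglog(eta)
```

With α = β = 1 this reduces to the usual complementary log-log model; the linear predictor η includes log(age at screening) additively, plus the smoking and STD indicators.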
According to models (a) and (b), respectively, the hazard of first HPV infection is increased by 6% and 11% for those who currently smoke (Smoke now = 1) relative to those who do not smoke (Smoke now = 0), holding other covariates in the model fixed; clearly this effect is not significant. On the other hand, the hazard of HPV infection is reduced by 38% and 50% for those who have had any other prior sexually transmitted disease (STD = 1) compared with those who have not (STD = 0); this effect is quite strikingly significant, at least when misclassification is accounted for. As reported by Neuhaus (1999), the ratio of the parameter estimates suggests that ignoring the errors in the HPV screening test leads to substantially biased estimates of the associations of covariates with infection status, with the direction of the bias reflecting attenuation towards the null. Our findings are qualitatively similar to those of Neuhaus (1999), although we show a somewhat lower effect for prior STDs, presumably due to our allowance for additional error.

Discussion
We have discussed the NPMLE of a distribution function based on current status data subject to misclassification. The ideas are also easily extended to regression models for the underlying survival time. We have illustrated the latter using a parametric regression model. Alternative methods to allow for misclassification in the current status response include the simulation extrapolation (SIMEX) method (see Hardin et al. 2003 for the SIMEX method applied to standard generalized linear models in the regression setting).
Recently, Küchenhoff et al. (2006) applied SIMEX to binary outcome data associated with a generalized linear model and compared results to the maximum likelihood approach espoused by Neuhaus (1999).
Although we considered a parametric regression model, semi-parametric survival models can also be analyzed using the ideas of Shiboski (1998) on semi-parametric generalized additive models. In this case, the technique of adjusting the link function to allow for misclassification, discussed in Sect. 2.5, can also be used. SIMEX provides an alternative approach. In addition, the bias adjustment algorithm discussed in Sect. 2.4 can also be applied in the regression context, in particular to allow for more complex misclassification models.
Throughout we have assumed that the misclassification rates and window of misclassification, if appropriate, are known exactly. In some cases, the rates may have to be estimated from a validation sample where the true response is measurable, perhaps by use of an expensive 'gold standard' technique. These data can then be incorporated into a full likelihood that accounts for the uncertainty in estimation of the misclassification rates. In principle, a similar approach could be used for validation data that provided information on the value of A or the size of the misclassification window. However, estimation of the value of A is itself a much studied, non-trivial estimation problem in detecting the time of transition of binomial classification rates. We leave these interesting extensions to future work.

Fig. 2 a Hypothetical data (α = 0.8, β = 0.8). b HPV data (α = 0.73, β = 0.9). Estimated cumulative distribution functions for hypothetical data (F assumed Exponential with mean 2) and the HPV data. Both the unconstrained NPMLE obtained through the pool-adjacent-violators algorithm and the proposed adjusted NPMLE are presented.

Table 2 Simulation averages (standard deviations) of two estimators of the distribution function F (Exponential with mean 2) at 5 monitoring times, when the data generating distribution is either subject to always being misclassified (A = ∞), or never being misclassified (A = 0).

Table 4 Simulation averages (standard deviations) of two estimators of the distribution function F (unconstrained NPMLE from the HPV data) at 5 monitoring times when the data generating distribution is subject to misclassification that varies with time.

Table 5 Estimates (and standard errors) of the log Relative Hazard (RH) for time to first HPV infection, which is assumed to follow a Weibull distribution.