Kernel-Based Visual Hazard Comparison (kbVHC): a Simulation-Free Diagnostic for Parametric Repeated Time-to-Event Models
Abstract
Repeated time-to-event (RTTE) models are the preferred method to characterize the repeated occurrence of clinical events. Commonly used diagnostics for parametric RTTE models require representative simulations, which may be difficult to generate in situations with dose titration or informative dropout. Here, we present a novel simulation-free diagnostic tool for parametric RTTE models: the kernel-based visual hazard comparison (kbVHC). The kbVHC aims to evaluate whether the mean predicted hazard rate of a parametric RTTE model is an adequate approximation of the true hazard rate. Because the true hazard rate cannot be directly observed, the predicted hazard is compared to a nonparametric kernel estimator of the hazard rate. With the degree of smoothing of the kernel estimator being determined by its bandwidth, the local kernel bandwidth is set to the lowest value that results in a bootstrap coefficient of variation (CV) of the hazard rate that is equal to or lower than a user-defined target value (CV_{target}). The kbVHC was evaluated in simulated scenarios with different numbers of subjects, hazard rates, CV_{target} values, and hazard models (Weibull, Gompertz, and circadian-varying hazard). The kbVHC was able to distinguish between Weibull and Gompertz hazard models, even when the hazard rate was relatively low (< 2 events per subject). Additionally, it was more sensitive than the Kaplan-Meier VPC in detecting circadian variation of the hazard rate. An additional useful feature of the kernel estimator is that it can be generated prior to model development to explore the shape of the hazard rate function.
KEY WORDS
model diagnostics; nonlinear mixed-effect models; pharmacodynamics; pharmacometrics; repeated time-to-event models
INTRODUCTION
Pharmacometric models are increasingly used to characterize the repeated occurrence of clinical events. Examples from the literature include models for emetic episodes, postoperative analgesic events, bone events in Gaucher’s disease, and transient lower esophageal sphincter relaxation (1, 2, 3, 4). Repeated time-to-event (RTTE) modeling is theoretically superior to alternative methods such as time-to-event modeling, which only considers the first event, and count modeling, which treats the events as counts within time intervals (4). This is because RTTE modeling can take events into account at the time of their occurrence, provided that the event time is not interval-censored. This is especially important when the variation of the hazard (e.g., during the day or after drug administration) is of interest (5,6).
Simulation-based diagnostics are the most commonly used diagnostics to evaluate pharmacometric RTTE models (7). The Kaplan-Meier visual predictive check (VPC) evaluates a model by comparing the observed and simulated Kaplan-Meier survival plots for every nth occurrence of an event (1,4). Another example is a VPC as proposed by Plan et al., in which observed and simulated events are discretized as counts within small time intervals (4). Recently, a hazard-based VPC, using nonparametric estimators of the hazard rate, has been proposed by Huh and Hutmacher as a diagnostic for parametric time-to-event models (8). The use of the hazard instead of survival in this diagnostic might allow for a more direct evaluation of the hazard model. However, the hazard-based VPC is also based on simulations.
A disadvantage of simulation-based diagnostics is that one must be able to generate simulations that are representative of the study in which the original data were collected. If data are obtained from a study that includes features such as dose titration or informative dropout, the results from the simulations could be misleading if these features are not correctly accounted for (9). Although this can occasionally be remedied by including additional features in the simulations, doing so considerably increases the complexity of the simulation-based analysis. More importantly, when specific study features cannot be correctly accounted for in simulations, the modeler has few options to evaluate an RTTE model. Although residual-based diagnostics have been proposed, their use in the pharmacometric literature is limited (10). Most proposed residuals are only defined for observed events, and their interpretation can be complicated. For example, the martingale and (modified) Cox-Snell residuals can have highly skewed distributions even if the correct model is fitted to a dataset (2). As such, these residuals provide little information on the accuracy of the estimated shape of an underlying hazard function over time (2,10,11).
In this study, we present the kernel-based visual hazard comparison (kbVHC), a novel simulation-free diagnostic for RTTE models. The sensitivity and specificity of the kbVHC to detect model misspecifications were evaluated in simulated scenarios and compared with those of the Kaplan-Meier VPC. Based on these simulations, guidance for the use of the kbVHC is provided. We also discuss η-shrinkage in the context of RTTE modeling and its influence on the kbVHC.
METHODS
Kernel-Based Visual Hazard Comparison
Kernel Hazard Rate Estimator

Step 1: Define the search space for the bandwidths.
The user selects the number of time points (N_{time}) at which the minimum satisfactory bandwidth will be determined, and the number of bandwidths (N_{Bw}) that will be tested at these time points.

Step 2: Determine the local kernel bandwidth at each of the N_{time} time points.

Step 3: Determine local kernel bandwidth for all time points.
The minimum satisfactory bandwidths derived at step 2 are smoothed with an Epanechnikov kernel with associated boundary kernels, as described by Muller and Wang (13). This allows the determination of local kernel bandwidths at every time point between T_{start} and T_{stop}. The bandwidth used to smooth the minimum satisfactory bandwidths from step 2 is constant at: 2 × (T_{stop}−T_{start}) / N_{time}.
After determining the local kernel bandwidth, this bandwidth is used to determine the nonparametric hazard rate over time. Another 1000-sample bootstrap using the same kernel bandwidths is used to calculate the 95% confidence interval of this hazard rate. The kernel-based nonparametric hazard rate and its 95% confidence interval are then plotted versus time and compared to the mean hazard rate of the parametric model (HAZ_{posthoc}) over time.
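The machinery of steps 1–3 and the bootstrap interval can be sketched in Python. This is an illustrative sketch under simplifying assumptions, not the authors' implementation: it uses an Epanechnikov kernel without the boundary corrections of Muller and Wang, assumes all subjects are at risk over the whole observation window, and uses a single bandwidth per evaluation rather than the smoothed local bandwidths described above. All function and variable names are our own.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel with support [-1, 1]."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kernel_hazard(event_times, t, bw):
    """Kernel estimate of the mean event rate at time t.
    event_times: list of per-subject arrays of event times; every subject
    is assumed to be at risk over the whole window (no censoring)."""
    ev = np.concatenate([np.asarray(e, float) for e in event_times])
    return epanechnikov((t - ev) / bw).sum() / (len(event_times) * bw)

def min_satisfactory_bandwidth(event_times, t, bandwidths, cv_target,
                               n_boot=1000, rng=None):
    """Step 2 (sketch): smallest tested bandwidth whose bootstrap CV of
    the hazard estimate at time t is at or below cv_target (a fraction)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(event_times)
    for bw in sorted(bandwidths):
        boot = np.array([
            kernel_hazard([event_times[i] for i in rng.integers(0, n, n)],
                          t, bw)
            for _ in range(n_boot)])                 # resample whole subjects
        if boot.mean() > 0 and boot.std() / boot.mean() <= cv_target:
            return bw
    return max(bandwidths)                           # fall back to the widest

def kernel_hazard_ci(event_times, grid, bw, n_boot=1000, rng=None):
    """Pointwise 95% bootstrap confidence interval of the kernel hazard
    on a grid of time points, resampling whole subjects with replacement."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = len(event_times)
    boot = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        sample = [event_times[i] for i in rng.integers(0, n, n)]
        boot[b] = [kernel_hazard(sample, t, bw) for t in grid]
    lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
    return lower, upper
```

The kernel hazard rate and the interval returned by kernel_hazard_ci would then be plotted over time alongside the model's mean HAZ_{posthoc}.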
Evaluation of the kbVHC
To evaluate whether the kbVHC would be capable of distinguishing between true and misspecified models, evaluations were performed by simulating RTTE datasets, refitting these datasets with true and misspecified models in NONMEM, and then performing a kbVHC based on the NONMEM output.
Table I. Overview of Simulated Scenarios for Which the kbVHC Was Evaluated
Simulation hazard model: tested values

All models: IHAZ(t) = PHAZ(t) × e^{η_i}
  Number of subjects: 50, 100, 125, 500
  Frailty (ω^{2}): 0.09, 0.25, 0.5, 0.75, 1.0

Constant hazard (with or without informative dropout): PHAZ(t) = λ, with dropout hazard IDROP = PDROP × e^{κ × η_i}
  λ [h^{−1}]: 0.01, 0.05
  PDROP [h^{−1}]: 0, 0.015
  Dropout-free period [h]: 36
  κ: −2, −1, 0, 1, 2

Gompertz hazard: PHAZ(t) = λ × e^{γ × t}
  λ [h^{−1}]: 0.005, 0.01, 0.025, 0.05
  γ [h^{−1}]: −0.005 · ln 2, −0.01 · ln 2, −0.02 · ln 2, −0.05 · ln 2

Weibull hazard: PHAZ(t) = λγ(λt)^{γ − 1}
  λ [h^{−1}]: 0.01, 0.025, 0.05, 0.1
  γ: 0.5, 0.7, 0.9

Circadian-varying hazard: PHAZ(t) = λ × [1 + amp × sin(2π/period × (t + phase))]
  λ [h^{−1}]: 0.005, 0.01, 0.05
  amp: 0.1, 0.25, 0.5, 0.75
  period [h]: 24
  phase [h]: 0
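For illustration, datasets like those in the last row of the table can be generated with Lewis' thinning algorithm. The sketch below simulates the circadian-varying hazard with log-normal frailty; it is an assumed implementation (the paper does not reproduce its simulation code here), and all names and default values are our own:

```python
import numpy as np

def simulate_circadian_events(n_subj, t_stop, lam, amp, period=24.0,
                              phase=0.0, omega2=0.25, rng=None):
    """Simulate repeated event times under the circadian hazard
    PHAZ(t) = lam * (1 + amp * sin(2*pi/period * (t + phase)))
    with log-normal frailty exp(eta), eta ~ N(0, omega2), using a
    thinning (rejection) algorithm. Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    subjects = []
    for _ in range(n_subj):
        frailty = np.exp(rng.normal(0.0, np.sqrt(omega2)))
        haz_max = lam * (1 + amp) * frailty          # upper bound on hazard
        t, events = 0.0, []
        while True:
            t += rng.exponential(1.0 / haz_max)      # candidate event time
            if t > t_stop:
                break
            haz = lam * (1 + amp * np.sin(2 * np.pi / period
                                          * (t + phase))) * frailty
            if rng.uniform() < haz / haz_max:        # accept w.p. haz/haz_max
                events.append(t)
        subjects.append(np.asarray(events))
    return subjects
```

Simulated datasets of this form can then be refitted with true and misspecified models, as described below.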
The simulated datasets were refitted in NONMEM 7.3 with different hazard models. The stochastic approximation expectation-maximization (SAEM) estimation method was used. The objective function value was obtained by performing the expectation step of the importance sampling (IMP) method using the final parameter estimates of the SAEM output (5). All Gompertz and Weibull datasets were fitted with both Weibull and Gompertz hazard models, while datasets with circadian-varying hazard were fitted with either a constant hazard or a circadian-varying hazard model.
After refitting, the kbVHC was constructed as described above using CV_{target} values from 5 to 40%. This was done to determine the impact of CV_{target} on the degree of smoothing of the kernel hazard rate and on the performance of the kbVHC. N_{time} and N_{Bw} were both set to 20, based on previous work (13). For selected illustrative datasets, the performance of the kbVHC was compared to that of the Kaplan-Meier VPC as described by Juul et al. (16).
Effect of η-Shrinkage on HAZ_{posthoc}
The kbVHC makes use of individual post hoc estimates of the hazard for all non-censored subjects to calculate the mean model-predicted HAZ_{posthoc}. As such, HAZ_{posthoc} might be affected by shrinkage of the individual estimates of the frailty term (η-shrinkage). Shrinkage occurs when the data are relatively uninformative at the subject level (e.g., when a relatively large number of subjects have a relatively low number of events). As a result, the individual hazard estimates will shrink towards the population estimate.
To establish the occurrence of bias in the derived HAZ_{posthoc} induced by η-shrinkage, we calculated the bias for simulation scenarios that resulted in more than 20% η-shrinkage. In situations where this bias exceeded 10%, we evaluated whether the corrected HAZ_{posthoc} could restore the performance of the kbVHC. Unless otherwise specified, the kbVHC plots presented in this paper show the uncorrected HAZ_{posthoc}.
RESULTS
Figure 1 shows the simulation-free RTTE diagnostic for two simulated datasets of 125 subjects, generated with Weibull and Gompertz models, each of which was fitted with both a Weibull and a Gompertz model. Fitting the true model to a dataset should result in a model-derived post hoc hazard rate (solid red line) that is comparable to the nonparametric kernel hazard rate (dashed black line). For a misspecified model, we expect to see a deviation from the kernel hazard rate and its confidence interval. It can be seen that HAZ_{posthoc} indeed follows the trend and falls within the 95% confidence interval of the kernel hazard rate when the dataset is fitted with the true model. In case of model misspecification (top right and bottom left), clear deviations of HAZ_{posthoc} from the kernel hazard rate can be observed.
Relation Between CV_{target} and Kernel Bandwidth
Figure 2 shows the kbVHC plots for a dataset with circadian-varying hazard that was fitted with the true model. When CV_{target} is set at 10%, kernel bandwidths are high (> 20 h, as depicted in the lower plots). As a result, the kernel estimator “smooths over” the circadian variation of the hazard rate. When CV_{target} is increased to 25%, the kernel bandwidths are lowered to around 5 h, and the circadian variation becomes clear in the kernel hazard rate.
Figure 3 shows similar findings for a Weibull dataset fitted with the true Weibull model. When a CV_{target} of 10% is used, the maximum bandwidth (Bw_{max}) is used at every time point (left column, lower plot). As a result, the initial sharp decline of the hazard rate is not visible in the kernel hazard rate. When CV_{target} is increased to 25%, the bandwidth decreases (right column, lower plot) and the HAZ_{posthoc} of the true model matches the kernel hazard rate better.
Influence of the Number of Events on Resulting Bandwidth
When using a CV_{target} of 10%, only the largest dataset (500 subjects, 2007 events) reveals a circadian variation of the hazard rate that matches the model-derived HAZ_{posthoc}. Here, the interquartile range of the local kernel bandwidths is between 5.2 and 6.1 h. The circadian variation remains clear for this dataset if the CV_{target} is increased to 20 or 30%, but the kernel hazard rate appears to be slightly undersmoothed in these cases. For the dataset with 125 subjects (415 events), the observed kernel hazard rate shows circadian variation when CV_{target} is set to 20 or 30%, but not with a CV_{target} of 10%. With the smallest dataset of 50 subjects (206 events), the CV_{target} needs to be set to at least 30% to reach local bandwidths that reveal circadian variation in the kernel hazard rate. The 24-h circadian rhythm was generally not detected in the kernel hazard rate when the kernel bandwidth exceeded 10–12 h.
Comparison with Kaplan-Meier VPC
Effect of η-Shrinkage on HAZ_{posthoc}
The extent of shrinkage in the parametric model is associated with the number of events per subject. For example, with an average of 1.8 events per subject, the Gompertz fit of the Gompertz scenario shown in Fig. 1 resulted in a shrinkage of 35.6%. In a simulated scenario with a 90% lower typical hazard rate (i.e., an average of 0.18 events per subject), shrinkage increased to 65.7%. We also observed that shrinkage tends to be asymmetric: subjects with a low hazard were more strongly shrunk towards the population estimate than subjects with a higher than typical hazard. These findings reflect the lower informativeness of a subject’s data when that subject has few or no events.
DISCUSSION
In this study, we developed and evaluated the kbVHC, a simulation-free diagnostic to evaluate the structural submodel of RTTE models in a nonlinear mixed-effect setting. The kbVHC can be used to identify misspecification of the model-predicted hazard over time.
Like the hazard-based VPC by Huh and Hutmacher, the kbVHC uses a nonparametric estimate of the hazard rate (8). However, there are several differences between the two methods. The hazard-based VPC is a diagnostic for time-to-event models, in which the nonparametric hazard rate of the observed data is compared to the nonparametric hazard rates of many simulated datasets (8). The kbVHC is primarily intended as a diagnostic for RTTE models; it relies on bootstrapping of the observed data to obtain a confidence interval of the nonparametric hazard rate, which is then compared to the mean post hoc hazard of an RTTE model.
In the studies we performed, we showed that the kbVHC was able to identify the correct model in datasets simulated with Gompertz and Weibull models, despite the fact that these models resemble each other in functional form (8). The kbVHC performs comparably to the Kaplan-Meier VPC in some situations while at the same time being easier to interpret (Figs. 1 and 5). In simulated scenarios with a rapidly changing hazard rate (Weibull and circadian-varying hazard), the kbVHC appeared to be more sensitive to model misspecification than the Kaplan-Meier VPC (Figs. 1, 5, and 6).
An additional advantage of the kbVHC over the Kaplan-Meier VPC is that it does not require simulations. Such simulations can be difficult and time-consuming to generate, especially in situations with flexible study designs including dose titration or informative dropout (9). When these features are not correctly accounted for, the Kaplan-Meier VPC can suggest model misspecification when there is none, as we have shown in Fig. 7 for a scenario with informative dropout. For the same dataset, the kbVHC correctly indicated that there was no model misspecification. It has to be mentioned, however, that in scenarios without a dropout-free period, there may not be sufficient information to estimate the HAZ_{posthoc} of the subjects that drop out early, causing strong asymmetric shrinkage that could introduce bias in the kbVHC.
The computational time needed to generate the kbVHC is in the range of several seconds to minutes, which is considerably faster than the computational time needed to generate the Kaplan-Meier VPC. For example, the bottom-left kbVHC plot in Fig. 1 was generated within a minute, while the simulations for the Kaplan-Meier VPC of this dataset (Fig. 5, right column) took almost an hour (without parallelization). The calculation of the nonparametric hazard rate and its confidence interval is the most time-consuming part, but it needs to be performed only once for each dataset, as it is independent of the parametric model. An additional advantage of the kernel estimator being independent of the parametric model is that it can be generated during exploratory data analysis, to inform model development at the start of an analysis. For example, the shape of the kernel hazard rate could indicate which parametric hazard models might be appropriate and help determine plausible initial estimates of their parameters.
The degree of smoothing of the kernel estimator depends on the user-defined CV_{target}, but also on the number of events in the dataset. When CV_{target} is too low, the kernel oversmooths the hazard rate, and interesting features of the underlying data (such as circadian rhythm) might be obscured. When CV_{target} is too high, the data will be undersmoothed, which can introduce spurious patterns in the kernel-estimated hazard rate. Based on the scenarios shown in Figs. 2, 3, and 4 (and multiple scenarios not shown here), we find the following settings to work well in practice: for datasets with fewer than 250 events, a suitable range of CV_{target} can be 15–40%; for datasets with 250–1000 events, a CV_{target} range of 10–30% may work well; and for datasets with more than 1000 events, a CV_{target} between 5 and 20% may work. These ranges provide guidance, but suitable CV_{target} values are context-specific and should be established on a case-by-case basis. The goal is to obtain the smoothest curve that still captures all important features of the data, similar to other nonparametric “smoothers,” such as Loess regression (17). This can be done by examining the nonparametric hazard estimates at different CV_{target} values, and using expert knowledge to interpret whether patterns that emerge with less smoothing (i.e., higher CV_{target} values) represent important features of the data or spurious patterns.
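The guidance above can be captured in a small helper. The range boundaries are heuristics drawn from the scenarios in this paper, not hard rules, and the function name is our own:

```python
def suggested_cv_target_range(n_events):
    """Suggested CV_target range (in %) as a function of the number of
    events in the dataset, per the heuristic guidance in the text."""
    if n_events < 250:
        return (15, 40)
    if n_events <= 1000:
        return (10, 30)
    return (5, 20)
```

In practice, one would still inspect the kernel estimate at several CV_{target} values within the suggested range.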
The addition of the plot of the local kernel bandwidth over time aids in interpreting the kbVHC, as it gives information on which time-varying features of the data may or may not be captured (see Figs. 2 and 3). For example, with large bandwidths (> 15 h), the kernel estimator provides limited information about any 24-h circadian variation of the hazard rate. Alternatively, to specifically test for a 24-h circadian variation, one might consider fixing the bandwidth at 6 to 10 h instead of using the CV_{target}-based approach used in this paper.
As no diagnostic test can evaluate all relevant aspects of a model, the kbVHC should be seen as complementary to other model diagnostics. In contrast to simulation-based diagnostics like the VPC, where confidence intervals represent variability in simulated data, the confidence interval in the kbVHC represents the precision of the kernel hazard estimate at the given kernel bandwidth. Therefore, the confidence interval of the kbVHC cannot be used to evaluate the statistical submodel or the covariate model of a nonlinear mixed-effects model. Moreover, the width of the 95% confidence interval of the nonparametric kernel hazard rate is directly dependent on the user-defined CV_{target}. As a result, the HAZ_{posthoc} of the true model does not necessarily have a 95% probability of being within the 95% confidence interval of the nonparametric hazard rate, as seen in Fig. 2. The confidence interval can nevertheless (qualitatively) aid the user in determining whether patterns in the nonparametric hazard rate, or deviations from HAZ_{posthoc}, are of a spurious or more structural nature.
Like many other model diagnostics that rely on empirical Bayes estimates (EBEs), the kbVHC can be affected by high levels of η-shrinkage (9,18). For the kbVHC, η-shrinkage can introduce a bias in the model-predicted HAZ_{posthoc}. An important advantage of the use of EBEs is that the kbVHC can in some cases account for the influence of dose titration or informative dropout, as demonstrated in Fig. 7. If, for instance, the estimated variance of the frailty term were used to predict the mean hazard rate, this property would be lost. An alternative to using EBEs would be random sampling from the conditional distribution of the individual parameters, as proposed by Lavielle and Ribba (19); however, such samples are not readily available in NONMEM 7.3 output.
In most tested scenarios, this bias was low (< 10%) even when there were moderate levels of η-shrinkage (around 30%). Higher bias was observed for scenarios with a low average number of events per subject (< 1) and a high variance of the frailty term (ω^{2} ≥ 0.5). To maintain acceptable performance of the kbVHC in these situations, we have proposed a simple correction method for log-normally distributed frailty terms. Since frailty is often log-normally distributed in pharmacometric RTTE models, we anticipate that this correction method will increase the usefulness of the kbVHC in practice. It should, however, be mentioned that in situations with informative dropout or dose titration, the frailty terms are not independent of the typical hazard, and the proposed correction method should not be used.
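The proposed correction method is not reproduced in this excerpt. One plausible correction of the general form discussed here rests on the fact that, for η ~ N(0, ω²), the population mean of the frailty e^η is exp(ω²/2), whereas the sample mean of shrunken EBE-based frailties is typically biased low. The sketch below is our own illustration of that idea and is not necessarily the exact method proposed by the authors:

```python
import numpy as np

def lognormal_frailty_correction(eta_hat, omega2):
    """Illustrative correction factor for eta-shrinkage bias in the mean
    EBE-based hazard, assuming frailty exp(eta) with eta ~ N(0, omega2).
    The population mean of exp(eta) is exp(omega2 / 2); shrunken EBEs
    typically yield a smaller sample mean, so the ratio rescales the
    EBE-based mean hazard upward. Not necessarily the authors' method."""
    eta_hat = np.asarray(eta_hat, float)
    return np.exp(omega2 / 2.0) / np.exp(eta_hat).mean()

# Corrected mean hazard (sketch): HAZ_corrected = factor * HAZ_posthoc
```

As noted above, a correction of this kind presumes frailty terms independent of the typical hazard, so it would not apply under informative dropout or dose titration.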
Additionally, pharmacometric RTTE models will often be linked to a pharmacokinetic model, with the predicted drug concentration affecting the model-predicted hazard rate. EBE shrinkage in the pharmacokinetic parameters can affect the predicted drug concentrations, thereby introducing bias in HAZ_{posthoc} and the resulting kbVHC output. The same limitation is to be expected in simulation-based RTTE model diagnostics from a sequentially performed pharmacodynamic analysis in which pharmacokinetic parameters are fixed to their individual (EBE-based) values. With simulation-based RTTE diagnostics, this limitation can be mitigated by also including interindividual variability in the pharmacokinetic parameters in the simulations, but for the kbVHC, this is not possible.
As is the case for all pharmacometric visual diagnostics, the evaluation of the kbVHC is mostly qualitative. This limited the feasibility of analyzing and reporting on many repetitions of similar scenarios. We did, however, evaluate the kbVHC in many scenarios (see Table I), and a selection of the most illustrative plots was made for this paper.
CONCLUSION
We have developed a simulation-free diagnostic for RTTE models based on a nonparametric kernel estimate of the hazard rate derived from observed events. The kbVHC has good sensitivity for structural model misspecification, even outperforming the existing Kaplan-Meier VPC for time-varying hazard models. Because the kbVHC does not require simulations, it can also be used in situations where appropriate simulations are difficult to generate. Like other diagnostics that rely on empirical Bayes estimates, the kbVHC can be affected by high levels of η-shrinkage through a biased HAZ_{posthoc}. However, we found that this bias can be approximated and corrected for when the model includes log-normally distributed frailty. An additional useful feature of the kernel estimator is that it can be generated prior to model development to explore the shape of the hazard rate function. These advantages make the kbVHC a valuable addition to the diagnostic toolbox for RTTE models.
References
1. Juul RV, Rasmussen S, Kreilgaard M, Christrup LL, Simonsson US, Lund TM. Repeated time-to-event analysis of consecutive analgesic events in postoperative pain. Anesthesiology. 2015;123(6):1411–9. https://doi.org/10.1097/ALN.0000000000000917
2. Cox EH, Veyrat-Follet C, Beal SL, Fuseau E, Kenkare S, Sheiner LB. A population pharmacokinetic-pharmacodynamic analysis of repeated measures time-to-event pharmacodynamic responses: the antiemetic effect of ondansetron. J Pharmacokinet Biopharm. 1999;27(6):625–44. https://doi.org/10.1023/A:1020930626404
3. Vigan M, Stirnemann J, Mentre F. Evaluation of estimation methods and power of tests of discrete covariates in repeated time-to-event parametric models: application to Gaucher patients treated by imiglucerase. AAPS J. 2014;16(3):415–23. https://doi.org/10.1208/s12248-014-9575-x
4. Plan EL, Ma G, Nagard M, Jensen J, Karlsson MO. Transient lower esophageal sphincter relaxation pharmacokinetic-pharmacodynamic modeling: count model and repeated time-to-event model. J Pharmacol Exp Ther. 2011;339(3):878–85. https://doi.org/10.1124/jpet.111.181636
5. Karlsson KE, Plan EL, Karlsson MO. Performance of three estimation methods in repeated time-to-event modeling. AAPS J. 2011;13(1):83–91. https://doi.org/10.1208/s12248-010-9248-3
6. Plan EL. Modeling and simulation of count data. CPT Pharmacometrics Syst Pharmacol. 2014;3(8):e129. https://doi.org/10.1038/psp.2014.27
7. Plan EL. Pharmacometric methods and novel models for discrete data. Uppsala University; 2011. Available from: http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva150929. Accessed 8 Aug 2016.
8. Huh Y, Hutmacher MM. Application of a hazard-based visual predictive check to evaluate parametric hazard models. J Pharmacokinet Pharmacodyn. 2016;43(1):57–71. https://doi.org/10.1007/s10928-015-9454-9
9. Karlsson MO, Savic RM. Diagnosing model diagnostics. Clin Pharmacol Ther. 2007;82(1):17–20. https://doi.org/10.1038/sj.clpt.6100241
10. Holford N. A time to event tutorial for pharmacometricians. CPT Pharmacometrics Syst Pharmacol. 2013;2(5):e43. https://doi.org/10.1038/psp.2013.18
11. Collett D. Model checking in the Cox regression model. In: Collett D, editor. Modelling survival data in medical research. Boca Raton, FL: CRC Press; 2015. p. 131–70.
12. Chiang CT, Wang MC, Huang CY. Kernel estimation of rate function for recurrent event data. Scand Stat Theory Appl. 2005;32(1):77–91. https://doi.org/10.1111/j.1467-9469.2005.00416.x
13. Muller HG, Wang JL. Hazard rate estimation under random censoring with varying kernels and bandwidths. Biometrics. 1994;50(1):61–76. https://doi.org/10.2307/2533197
14. Beal SL, Sheiner LB, Boeckmann AJ, Bauer RJ. NONMEM Users Guides. Ellicott City: Icon Development Solutions; 2010.
15. Nyberg J, Karlsson KE, Jönsson S, Simonsson USH, Karlsson MO, Hooker AC. Simulating large time-to-event trials in NONMEM. Population Approach Group Europe. 2014. Available from: https://www.page-meeting.org/default.asp?abstract=3166. Accessed 20 May 2016.
16. Juul RV, Nyberg J, Lund TM, Rasmussen S, Kreilgaard M, Christrup LL, et al. A pharmacokinetic-pharmacodynamic model of morphine exposure and subsequent morphine consumption in postoperative pain. Pharm Res. 2016;33(5):1093–103. https://doi.org/10.1007/s11095-015-1853-5
17. Jacoby WG. Loess: a nonparametric, graphical tool for depicting relationships between variables. Elect Stud. 2000;19(4):577–613.
18. Savic RM, Karlsson MO. Importance of shrinkage in empirical Bayes estimates for diagnostics: problems and solutions. AAPS J. 2009;11(3):558–69. https://doi.org/10.1208/s12248-009-9133-0
19. Lavielle M, Ribba B. Enhanced method for diagnosing pharmacometric models: random sampling from conditional distributions. Pharm Res. 2016;33(12):2979–88. https://doi.org/10.1007/s11095-016-2020-3
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.