Natural History Model
The natural history model structure is presented in Fig. 1. The model comprised eight health states: five disease progression states (i.e., no cirrhosis [F0–F3]; compensated cirrhosis [F4]; DCC; HCC; and liver transplant), two SVR states (i.e., SVR with a history of no cirrhosis; and SVR with a history of compensated cirrhosis), and an absorbing mortality state (i.e., liver-related and non-liver-related death), which could be reached from any state. DCC was modeled as a single health state [18, 20, 21]. Our model allowed disease progression to vary across genotypes [22, 23]. First, the risks of cirrhosis and HCC have been shown to be higher in GT3-infected than in GT1-infected patients. Second, relative to GT1 patients, GT2 patients are at significantly lower risk, and GT3 patients at higher risk, of long-term morbidity and mortality [25, 26].
Patients entering the model initiated treatment in one of two initial fibrosis states (i.e., F0–F3 or F4). With successful treatment, patients achieved SVR and transitioned to the corresponding SVR state. In the absence of successful treatment, patients either remained in their current health state or progressed to more severe stages of liver disease following the natural course of the disease.
In the model, patients could develop HCC from any SVR state, albeit at lower rates than patients who did not achieve SVR. In turn, patients who achieved SVR from compensated cirrhosis were assumed to face a higher risk of HCC than those who achieved SVR from no cirrhosis [27, 28]. A proportion of patients with compensated cirrhosis progressed to DCC [29, 30]. Some patients with DCC progressed to HCC, while a proportion received liver transplants. Patients with HCC could also receive liver transplants [21, 31, 32]. In addition, DCC, HCC, and liver transplant are commonly accepted as advanced stages of liver disease, and we therefore applied excess liver-related mortality risks to these states [17, 18, 33]. Finally, we assumed that spontaneous remission was not possible for patients with chronic HCV.
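The state structure and allowed transitions described above can be sketched as follows. This is an illustrative outline only: the state names follow the text, but the transition function is generic and no probabilities from Table 1 are reproduced here.

```python
# Illustrative sketch of the Markov natural-history structure described
# in the text. All state names follow the article; the transition
# probabilities themselves come from Table 1 and are not shown here.

STATES = [
    "F0-F3", "F4", "DCC", "HCC", "Liver transplant",
    "SVR (no cirrhosis)", "SVR (cirrhosis)", "Death",
]

# Allowed transitions (state -> set of reachable states); death is
# absorbing and reachable from every state, per the model description.
ALLOWED = {
    "F0-F3": {"F0-F3", "F4", "SVR (no cirrhosis)", "Death"},
    "F4": {"F4", "DCC", "HCC", "SVR (cirrhosis)", "Death"},
    "DCC": {"DCC", "HCC", "Liver transplant", "Death"},
    "HCC": {"HCC", "Liver transplant", "Death"},
    "Liver transplant": {"Liver transplant", "Death"},
    "SVR (no cirrhosis)": {"SVR (no cirrhosis)", "HCC", "Death"},
    "SVR (cirrhosis)": {"SVR (cirrhosis)", "HCC", "Death"},
    "Death": {"Death"},
}

def step(dist, P):
    """Advance the cohort distribution by one cycle: row vector times
    the transition matrix P (new[j] = sum_i dist[i] * P[i][j])."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
```

In a full implementation, `step` would be applied once per model cycle to a cohort distribution initialized in F0–F3 or F4, with `P` populated from the Table 1 inputs.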
Table 1 shows model inputs such as patient characteristics, transition probabilities associated with fibrosis and non-fibrosis disease progression, genotype-specific fibrosis and non-fibrosis progression hazard ratios, and background age- and gender-adjusted probability of death.
Study Population and Treatment Comparators
In the base case we focused on GT1, treatment-naïve, non-cirrhotic patients, who comprise the largest patient segment in Japan. In a PMOS of GLE/PIB, treatment-naïve patients accounted for 67.8% and non-cirrhotic patients for 84.4% of all patients with HCV. Fifty percent of patients with HCV had GT1 (of whom GT1b patients formed the vast majority), and 50.6% were male. The average age of the HCV population was 66.5 years. Using a segmented approach (i.e., the comparison of one intervention versus one comparator within a pre-specified patient segment, defined by patients’ treatment history, cirrhosis status, and/or genotype), we compared GLE/PIB with other comparators approved for HCV treatment in Japan: sofosbuvir/ledipasvir (SOF/LDV), elbasvir + grazoprevir (EBR + GZR), daclatasvir/asunaprevir/beclabuvir (DCV/ASV/BCV), and no treatment. Since the combination of sofosbuvir and velpatasvir is approved in Japan only for patients who have failed on DAAs or those with DCC, it is not a relevant comparator in the current segmented analysis, which is restricted to treatment-naïve patients without cirrhosis.
Given that GLE/PIB is a pan-genotypic treatment, we also analyzed cost-effectiveness from a broader perspective, using a portfolio approach to inform decision-making in the entire patient population. The portfolio approach involved comparing treatment strategies across combinations of patient segments (i.e., treatment history–cirrhosis status–genotype combinations), which in turn enabled flexible computation of a pan-genotypic incremental cost-effectiveness ratio (ICER) for the overall HCV population of interest. Computationally, the model calculated outcomes for each segment and aggregated costs, QALYs, and clinical outcomes by weighting each segment on the basis of the patients’ treatment history, cirrhosis status, and genotype distribution to obtain a consolidated, weighted portfolio ICER and clinical outcomes. In the portfolio analysis, we compared the GLE/PIB portfolio to a portfolio comprising treatment with SOF/LDV in GT1–2 patients and SOF + ribavirin in GT3 patients. Although GT3–6 patients were eligible to enroll in the GLE/PIB trial in Japan, only GT3 patients were recruited; consequently, approval of GLE/PIB in the GT3–6 segment was based on clinical trial data from GT3 patients only.
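The segment-weighting step described above amounts to aggregating incremental costs and QALYs across segments before forming the ratio. A minimal sketch, with purely illustrative weights and incremental values (not the study's inputs):

```python
# Hedged sketch of the portfolio aggregation: incremental costs and
# QALYs per segment are weighted by the segment's population share,
# then the pan-genotypic ICER is the ratio of the weighted sums.
# All numbers below are illustrative placeholders.

def portfolio_icer(segments):
    """segments: list of dicts with keys 'weight' (population share),
    'd_cost' and 'd_qaly' (incremental cost in JPY and QALYs of the
    intervention vs. the comparator within that segment)."""
    d_cost = sum(s["weight"] * s["d_cost"] for s in segments)
    d_qaly = sum(s["weight"] * s["d_qaly"] for s in segments)
    return d_cost / d_qaly

segments = [
    {"weight": 0.70, "d_cost": -150_000, "d_qaly": 0.02},
    {"weight": 0.20, "d_cost": -80_000, "d_qaly": 0.01},
    {"weight": 0.10, "d_cost": 200_000, "d_qaly": 0.15},
]
```

With these placeholder figures the weighted incremental cost is negative while the weighted QALY gain is positive, i.e., the portfolio would be dominant; the study's actual weights were derived from the Japanese HCV population distribution.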
We extracted efficacy and duration data directly from Japanese phase III clinical trials [35,36,37,38,39,40,41,42,43], on the basis of the approved label for each regimen [14, 44,45,46,47]. Adverse event (AE) rates with DAA treatment were low; thus, AE costs had a negligible impact on overall cost and were excluded from the analysis. For regimens with no Japanese trials, we used data from international trials. For regimens with multiple phase III trials in a given patient segment, we consolidated data across the relevant trials. We used an intention-to-treat (ITT) perspective.
The expected treatment duration for each regimen was computed on the basis of the labeled duration and trial-based discontinuation rates [35,36,37,38,39,40,41,42,43]. Table 2 shows the treatment efficacy for all patient segments included in the analysis for both the segmented and portfolio approaches. For transparency, we reported SVR rates by patient segment.
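One possible reading of this expected-duration calculation is sketched below. The assumption that discontinuing patients stop halfway through therapy on average is ours, introduced purely for illustration; the study derived durations from the trial-reported discontinuation data.

```python
# Illustrative sketch of an expected-duration calculation from the
# labeled duration and a trial-based discontinuation rate. The
# assumption that discontinuers receive, on average, half the labeled
# duration is a placeholder, not taken from the article.

def expected_duration(labeled_weeks, discontinuation_rate,
                      mean_fraction_if_discontinued=0.5):
    completers = (1 - discontinuation_rate) * labeled_weeks
    discontinuers = (discontinuation_rate * labeled_weeks
                     * mean_fraction_if_discontinued)
    return completers + discontinuers

# e.g., an 8-week label with a 1% discontinuation rate yields an
# expected duration slightly under 8 weeks.
expected_duration(8, 0.01)
```

The expected duration, rather than the full labeled duration, then feeds into the per-course drug cost.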
Health state utilities were drawn from Ishida and Yotsuyanagi (Table 1). Treatment-related health utility reflects the effect of treatment on quality of life over the treatment duration. Treatment-related health utility data were derived from the published literature, when available [49, 50]. When no relevant published data existed, we made the simplifying assumption that treatment-related utility matched that observed in the AbbVie clinical trials of GLE/PIB [35, 36].
We included only direct medical costs in this study (Table 1). Direct cost estimates for health states were taken from published Japanese studies [17, 28]. As a result of negligible inflation in Japan, cost data were not inflated from 2006 (for liver transplant-related health state costs) or 2014 (for all other health state costs) to the present year; Japanese guidelines support not inflating cost estimates. The cost per course of a therapy was calculated by multiplying the daily cost of the regimen by the mean (trial-based) duration of treatment. The DAA treatment options generally require little monitoring, and monitoring costs would be similar across the treatment options considered in this evaluation; we therefore assumed no on-treatment monitoring costs. All data were deidentified when used for this analysis. This article does not contain any studies with human participants or animals performed by any of the authors and did not require institutional review.
The model was developed following good modeling practices [53, 54]. We estimated direct medical costs, liver outcomes, QALYs, and ICERs. Discount rates (costs, utilities, and life years) in the base case were set to 2% as per Japanese guidelines [51, 55]. We assumed a payer WTP of JPY 5 million/QALY (USD 46,015/QALY) as the threshold for assessing the cost-effectiveness of GLE/PIB with the net monetary benefit (NMB) approach. The NMB is a summary statistic representing the net value of an intervention relative to an alternative health technology at a given WTP threshold per QALY. A positive NMB indicates that the intervention is cost-effective compared to the alternative at that WTP threshold. We chose the NMB approach over ICERs for reporting results because the NMB is easier to interpret when a treatment option is dominant.
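The NMB decision rule described above reduces to a single expression, NMB = WTP × ΔQALY − Δcost. A minimal sketch using the base-case WTP, with purely illustrative incremental values:

```python
# The NMB decision rule: positive NMB means the intervention is
# cost-effective vs. the comparator at the given WTP per QALY.
# Incremental cost/QALY figures below are illustrative only.

def nmb(d_qaly, d_cost, wtp=5_000_000):
    """Net monetary benefit (JPY) of an intervention vs. a comparator:
    wtp * incremental QALYs - incremental cost."""
    return wtp * d_qaly - d_cost

nmb(0.5, 1_000_000)   # positive -> cost-effective at JPY 5M/QALY
nmb(0.25, 2_000_000)  # negative -> not cost-effective at this WTP
```

Note that a dominant option (lower cost, more QALYs) always yields a positive NMB regardless of the WTP, which is why this summary is easier to interpret than an ICER in that situation.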
In the base case analysis, we compared GLE/PIB to four DAAs and no treatment in treatment-naïve non-cirrhotic GT1 patients. We performed a sequential analysis to derive the cost-effectiveness frontier by eliminating sequentially dominated and extendedly dominated strategies.
In the context of multiple comparisons, pairwise comparisons of ICERs may be misleading. To establish a complete comparison of treatment options, we performed a fully incremental analysis, which involved calculating the incremental QALY gains and costs of all treatment options and ranking them by ascending cost. Options that were dominated (i.e., more expensive and less effective than one or more alternatives) or extendedly dominated (i.e., yielding fewer QALYs at higher cost than a linear combination of two alternatives) were removed. The ICER of each remaining option was then calculated as its additional costs divided by its additional QALYs relative to the next least costly option. If one treatment dominates all the others, either by dominance or extended dominance, then only that treatment option is considered cost-effective. The remaining treatment options form the cost-effectiveness frontier, the set of points corresponding to treatment alternatives considered cost-effective at different values of the cost-effectiveness threshold. Any option above or to the left of the frontier represented an inefficient (i.e., suboptimal) option, as more QALYs were achievable at equal or lower costs (i.e., it was dominated or extendedly dominated).
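The elimination procedure just described can be sketched as follows. This is an illustrative implementation of the generic frontier algorithm, not the study's code, and the (cost, QALY) pairs in the test data are placeholders; ties in QALYs are not handled.

```python
# Sketch of a fully incremental analysis: sort strategies by cost,
# drop strictly dominated options, then drop extendedly dominated
# options until ICERs increase monotonically along the frontier.

def frontier(options):
    """options: dict name -> (total cost, total QALYs). Returns the
    cost-effectiveness frontier as (name, cost, qaly) triples sorted
    by ascending cost. Assumes no exact QALY ties among survivors."""
    pts = sorted(options.items(), key=lambda kv: kv[1][0])
    # Strict dominance: another option is at least as cheap and at
    # least as effective (and not identical).
    pts = [(n, c, q) for (n, (c, q)) in pts
           if not any(c2 <= c and q2 >= q and (c2, q2) != (c, q)
                      for (c2, q2) in options.values())]
    # Extended dominance: remove interior points whose ICER vs. the
    # previous option exceeds the ICER of the next option vs. them.
    changed = True
    while changed:
        changed = False
        for i in range(1, len(pts) - 1):
            _, c0, q0 = pts[i - 1]
            _, c1, q1 = pts[i]
            _, c2, q2 = pts[i + 1]
            if (c1 - c0) / (q1 - q0) > (c2 - c1) / (q2 - q1):
                del pts[i]  # extendedly dominated
                changed = True
                break
    return pts
```

ICERs are then reported only for adjacent pairs on the returned frontier, matching the "next least costly" comparison described above.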
In scenario analyses, we assessed the cost-effectiveness of GLE/PIB by varying the method of comparison or key model parameters. In scenario 1, we adopted a portfolio approach whereby a pan-genotypic ICER for the overall GT1–3 HCV population was derived. This overall ICER was calculated as a weighted average of patient segments defined by genotype, treatment history, and cirrhosis status, with weights based on the Japanese HCV population. In this scenario analysis, we reported findings for a GLE/PIB portfolio in GT1–3 versus a sofosbuvir (SOF)-based portfolio (namely SOF/LDV in GT1–2 and SOF + ribavirin in GT3). In scenarios 2 and 3, we varied the baseline age by ± 5 years, namely a “low” age of 61.5 years and a “high” age of 71.5 years. The impact of discount rates was explored in scenario 4 (0%) and scenario 5 (4%).
Baseline demographics, the background death rate, discount rates, regimen duration, and costs were not varied in the deterministic sensitivity analyses (DSA) and probabilistic sensitivity analyses (PSA). The non-treatment-specific variables tested in the DSA included transition probabilities related to disease progression, health state costs, and health utilities. For the PSA, 500 simulations were drawn from the variables’ distributions. SVR rates of 100% were varied in the DSA and PSA using a method proposed by Briggs et al. Several parameters were tested in multi-way sensitivity analysis, including SVR rates in patients without cirrhosis and the GT-specific fibrosis and non-fibrosis progression hazard ratios. As a result of the lack of data, PSA variation of the treatment-related utility change was possible only for GLE/PIB, where a normal distribution was assumed. The results of the PSAs are summarized graphically using cost-effectiveness acceptability curves (CEAC). Each point on a CEAC indicates the percentage of simulations in which a treatment option is cost-effective compared to the other treatment options at a specific WTP per QALY. Each CEAC line is obtained by varying the payer WTP/QALY from JPY 0 to 20 million. For each treatment option, the CEAC is the line indicating the percentage of simulations in which that strategy yields the highest NMB compared to the other treatment options. When comparing multiple treatment options, at each WTP/QALY the lines sum to 100%. Table 1 provides details of the DSA and PSA inputs.
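The computation behind a single CEAC point can be sketched as follows: for a given WTP, count the share of PSA draws in which each strategy achieves the highest NMB. The simulated (cost, QALY) draws in the test data are illustrative placeholders, not PSA outputs from the study.

```python
# Sketch of one CEAC point: for each PSA draw, identify the strategy
# with the highest NMB at the given WTP, then report each strategy's
# win frequency. By construction the frequencies sum to 1 (100%),
# matching the property of the CEAC lines described in the text.

def ceac_point(draws, wtp):
    """draws: list of dicts {strategy: (cost, qaly)}, one per PSA draw.
    Returns {strategy: probability of having the highest NMB at wtp}."""
    wins = {s: 0 for s in draws[0]}
    for d in draws:
        best = max(d, key=lambda s: wtp * d[s][1] - d[s][0])
        wins[best] += 1
    return {s: w / len(draws) for s, w in wins.items()}
```

A full CEAC is obtained by evaluating `ceac_point` over a grid of WTP values (here, JPY 0 to 20 million) and plotting each strategy's probability against the WTP.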