The systematic review adhered to published methods [19, 20, 21]. This study was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension statement for network meta-analyses [4] (Electronic Supplementary Material [ESM] Appendix 1) and International Society for Pharmacoeconomics and Outcomes Research (ISPOR) guidelines [22].
This study included prospective randomised controlled trials (RCTs) of adults (aged ≥ 18 years) with a confirmed diagnosis of RRMS (≥ 85% of the study population) and treated with dimethyl fumarate (DMF), interferon (IFN) beta-1a, pegylated IFN beta-1a, IFN beta-1b, natalizumab, glatiramer acetate (GA), fingolimod, teriflunomide, alemtuzumab, ocrelizumab, cladribine or placebo. This article is based on previously conducted studies and does not contain any studies with human participants or animals performed by any of the authors.
The key outcomes of interest for the MTC were ARR, 3- and 6-month confirmed disability progression (CDP3M and CDP6M, respectively) and SAEs.
MEDLINE, MEDLINE In-Process, MEDLINE Daily Update, MEDLINE Epub Ahead of Print, PubMed, Embase, Cochrane Central Register of Controlled Trials (CENTRAL), Science Citation Index (SCI), National Institutes of Health (NIH) ClinicalTrials.gov, World Health Organization (WHO) International Clinical Trials Registry Platform (ICTRP), PharmNet.Bund, EU Clinical Trials Register (EUCTR), International Standard Randomised Controlled Trial Number (ISRCTN) Registry, electronic medicines compendium (eMC) and European Medicines Agency (EMA) register were searched for relevant studies from database inception to June 2018 without language or publication limits. The MEDLINE search strategy is shown in ESM Appendix 2. The reference lists of included articles were checked for additional relevant studies.
Two reviewers independently screened articles for inclusion, assessed study quality and performed data extraction. For each study, background information (year of publication, other related publications, country, funding, study aim and treatment type) was extracted where available. Other specific data extracted were sample size, location/setting, methods employed (e.g. randomisation and allocation concealment, blinding), patient baseline characteristics (e.g. age, diagnosis, comorbidities, previous and concomitant treatments), interventions/study arms compared (description of interventions and comparators), outcomes assessed (e.g. definition of outcome, when assessed, who assessed, methods used to assess outcome[s]), results (e.g. numbers, percentages and effect sizes with confidence intervals [CIs; where relevant]) and follow-up time. Notably, for the outcomes of interest, ARR per arm was extracted as the total number of relapses divided by the total number of patient-years of follow-up. For time-to-event outcomes (e.g. disability progression), the hazard ratio (HR) with 95% CI was extracted where possible. SAE data were extracted according to the definition used in each individual study, excluding MS relapse. The methodological quality of each study was assessed using the Cochrane risk-of-bias tool for RCTs [23]. Discrepancies at all stages of the review were resolved through discussion or consultation with a third reviewer.
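As an illustration of the ARR extraction described above, the following minimal R sketch (with entirely hypothetical counts and a hypothetical trial label) derives the arm-level ARR from the total number of relapses and the total patient-years of follow-up.

```r
# Hypothetical example: arm-level ARR extraction (illustrative values only)
arm <- data.frame(
  study     = "Hypothetical trial",
  treatment = c("Active", "Placebo"),
  relapses  = c(120, 210),   # total relapses observed in each arm
  pt_years  = c(400, 410)    # total patient-years of follow-up in each arm
)

# ARR per arm = total relapses / total patient-years of follow-up
arm$arr <- arm$relapses / arm$pt_years
print(arm)
```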
Networks were created for each of ARR, CDP3M, CDP6M and SAE. ARR was analysed using the rate ratio (RR) at any time point ≥ 12 months. This cut-off was considered sufficient because ARR is, by definition, a rate expressed over 12 months; a follow-up of less than 12 months was considered too short to show a clinical effect. CDP3M and CDP6M were analysed using the HR for time to disability progression at 24 months as the effect estimate. SAEs were analysed using the odds ratio (OR) at the 24-month follow-up as the effect estimate. The studies eligible for inclusion in the network were assessed for similarity based on the following characteristics: diagnosis, diagnostic criteria, age, gender, Expanded Disability Status Scale (EDSS) range, duration of disease, number of relapses prior to enrolment, previously treated patients, EU-licensed doses only and follow-up time.
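Where a study did not report the rate ratio directly, it can be derived from the arm-level data. The sketch below (hypothetical counts, assuming approximately Poisson-distributed relapse counts) forms the log rate ratio between two arms and its approximate standard error, which is the input required for inverse-variance pooling.

```r
# Hypothetical arm-level data (illustrative values only)
relapses_active  <- 120; pt_years_active  <- 400
relapses_placebo <- 210; pt_years_placebo <- 410

# Rate ratio of active vs placebo (ratio of annualised relapse rates)
rr <- (relapses_active / pt_years_active) / (relapses_placebo / pt_years_placebo)

# Approximate SE of the log rate ratio under a Poisson assumption:
# var(log RR) ~ 1/events_active + 1/events_placebo
se_log_rr <- sqrt(1 / relapses_active + 1 / relapses_placebo)

c(log_rr  = log(rr),
  se      = se_log_rr,
  ci_low  = exp(log(rr) - 1.96 * se_log_rr),
  ci_high = exp(log(rr) + 1.96 * se_log_rr))
```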
‘Head-to-head’ comparisons of treatments were performed in line with the Cochrane Handbook for Systematic Reviews of Interventions [20]. Forest plots of effect sizes, showing the results of individual studies, were prepared using the meta package [24] in R software (R Foundation for Statistical Computing, Vienna, Austria) [25]. Where more than one study reported the same outcome measure for clinically similar populations, pooled effect estimates and 95% CIs were calculated using random-effects models. For pooled analyses of relative treatment effects (e.g. RR, HR), study weights were calculated using the generic inverse variance method. For pooled analyses of binary data based on the number of participants with an event and the total number of participants, study weights were calculated using the Mantel–Haenszel method. Assessment of publication bias was not possible because too few studies were available. Heterogeneity was assessed using the I² statistic.
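The pairwise syntheses described above could be implemented along the following lines. This is a minimal sketch with hypothetical study-level inputs, using metagen() from the meta package for generic inverse-variance pooling of log-scale relative effects and metabin() with the Mantel–Haenszel method for binary SAE data; the trial labels and numerical values are illustrative only.

```r
library(meta)

# Hypothetical log hazard ratios for CDP6M and their standard errors
hr_dat <- data.frame(
  study = c("Trial A", "Trial B"),
  logHR = c(log(0.70), log(0.82)),
  seHR  = c(0.12, 0.15)
)

# Generic inverse-variance pooling; the random-effects estimate is reported
m_hr <- metagen(TE = hr_dat$logHR, seTE = hr_dat$seHR,
                studlab = hr_dat$study, sm = "HR")
forest(m_hr)    # forest plot of individual and pooled estimates
summary(m_hr)   # output includes the I^2 heterogeneity statistic

# Hypothetical binary SAE data pooled with the Mantel-Haenszel method
m_sae <- metabin(event.e = c(30, 25), n.e = c(400, 350),
                 event.c = c(22, 28), n.c = c(395, 345),
                 studlab = c("Trial A", "Trial B"),
                 sm = "OR", method = "MH")
forest(m_sae)
```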
All indirect comparisons and MTC methods used in this report are consistent with ISPOR task force recommendations for the conduct of direct and indirect meta-analyses [6, 7]. MTC was performed using a Bayesian approach with the gemtc package [26]. A burn-in of 50,000 simulations was used, followed by a further run of 50,000 simulations to obtain parameter estimates. Model convergence was assessed using the Brooks–Gelman–Rubin statistic [27]. Random-effects models were used. Model fit was assessed using the residual deviance and the deviance information criterion. At each iteration, treatments were ranked according to their estimated effect; a frequency table was constructed from these rankings and normalised by the number of iterations to give the rank probabilities [25]. Sensitivity analyses were performed for fixed-effects models, the definition of SAEs and follow-up data.
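A minimal sketch of this Bayesian MTC workflow is shown below, assuming hypothetical arm-level ARR inputs (relapse counts and patient-years of exposure) in the format expected by gemtc. The run lengths and convergence diagnostic follow the settings described above, but the studies, treatments and column values are purely illustrative.

```r
library(gemtc)

# Hypothetical arm-level data: relapse counts and patient-years of exposure
arr_data <- data.frame(
  study      = c("Trial A", "Trial A", "Trial B", "Trial B"),
  treatment  = c("Cladribine", "Placebo", "IFN beta-1a", "Placebo"),
  responders = c(110, 190, 140, 200),   # total relapses per arm
  exposure   = c(420, 400, 500, 480)    # patient-years per arm
)

network <- mtc.network(data.ab = arr_data)

# Random-effects model; Poisson likelihood with log link yields rate ratios
model <- mtc.model(network, linearModel = "random",
                   likelihood = "poisson", link = "log")

# 50,000 burn-in (adaptation) iterations followed by 50,000 sampling iterations
results <- mtc.run(model, n.adapt = 50000, n.iter = 50000)

coda::gelman.diag(results$samples)   # Brooks-Gelman-Rubin convergence statistic
summary(results)                     # includes residual deviance and DIC
rank.probability(results)            # rank probabilities for each treatment
```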
In many cases, the data required as inputs to the standard MTC models were not reported by the included studies. Missing values were calculated from the available data using standard methods [20, 28, 29, 30]. When missing values could not be estimated from the available data, the affected data were not included in the analysis.
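A typical example of such a derivation, consistent with the Cochrane Handbook approach [20], is reconstructing the standard error of a log hazard ratio from a reported point estimate and 95% CI; the sketch below uses hypothetical values.

```r
# Hypothetical reported values: HR 0.70 (95% CI 0.55-0.89)
hr <- 0.70; ci_low <- 0.55; ci_high <- 0.89

# On the log scale, the 95% CI spans 2 * 1.96 standard errors
se_log_hr <- (log(ci_high) - log(ci_low)) / (2 * 1.96)

c(log_hr = log(hr), se = se_log_hr)
```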
Network diagrams for each outcome were assessed for the presence of closed loops, where inconsistency between direct and indirect evidence may occur. For those networks with the potential for inconsistency, we used the node-splitting method to check for evidence of inconsistency [3]. An inconsistency factor of exactly 1 would indicate that the indirect and direct estimates of a treatment comparison were identical (≥ 2 studies per comparison were required). To further confirm the results of the MTC analysis, the MTC estimates were compared with the corresponding pairwise meta-analyses.
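In gemtc, the node-splitting check described above could be run roughly as follows; this is a sketch that assumes the hypothetical network object built in the earlier MTC example and applies only where a comparison is informed by both direct and indirect evidence.

```r
# Node-splitting analysis: split each eligible comparison into its direct and
# indirect components and compare the two estimates
nodesplit <- mtc.nodesplit(network,
                           linearModel = "random",
                           likelihood = "poisson", link = "log",
                           n.adapt = 50000, n.iter = 50000)

summary(nodesplit)        # direct vs indirect estimates and a p-value per split
plot(summary(nodesplit))  # forest-style plot of the node-splitting results
```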