Abstract
Background
Preprints are increasingly used to disseminate research results, providing multiple sources of information for the same study. We assessed the consistency in effect estimates between preprints and subsequent journal articles of COVID-19 randomized controlled trials.
Methods
The study utilized data from the COVID-NMA living systematic review of pharmacological treatments for COVID-19 (covid-nma.com) up to July 20, 2022. We identified randomized controlled trials (RCTs) evaluating pharmacological treatments vs. standard of care/placebo for patients with COVID-19 that were originally posted as preprints and subsequently published as journal articles. Trials that did not report the same analysis in both documents were excluded. Data were extracted independently by pairs of researchers with consensus to resolve disagreements. Effect estimates extracted from the first preprint were compared to effect estimates from the journal article.
Results
The search identified 135 RCTs originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not meet the eligibility criteria, of which 13 reported an interim analysis in the preprint and a final analysis in the journal article. Overall, 109 preprint–article pairs of RCTs were included in the analysis. The median (interquartile range) delay between preprint and journal article was 121 (73–187) days, the median sample size was 150 (71–464) participants, 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as ‘some concerns’ for 80% of RCTs. We found that 81 preprint–article pairs of RCTs were consistent for all outcomes reported. Nine RCTs had at least one outcome with a discrepancy in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the effect estimate. Furthermore, six RCTs had at least one outcome missing from the journal article and 14 RCTs had at least one outcome added in the journal article compared to the preprint. There was a change in the direction of effect in one RCT. No changes in statistical significance or conclusions were found.
Conclusions
Effect estimates were generally consistent between COVID-19 preprints and subsequent journal articles. The main results and interpretation did not change in any trial. Nevertheless, some outcomes were added or deleted in some journal articles.
Background
The scientific community has witnessed a significant shift in the way research findings are disseminated due to the COVID-19 pandemic and the subsequent rise of preprints [1, 2]. Preprints are early versions of scientific research papers that are made publicly available before they have undergone formal peer review and publication. By circumventing the lengthy peer review process, preprints allow for rapid communication of new evidence to inform public health responses. This is particularly crucial during pandemics. Notably, results of the world’s largest COVID-19 platform trial, RECOVERY [3], were first reported as preprints, enabling swift, real-time evaluation of the interventions and potential harms. While discussing the benefits of preprints in patient care, lead RECOVERY author Peter Horby emphasized that peer review delays could have life-threatening consequences [4].
Without formal peer review and rigorous quality control, preprints can amplify misleading information stemming from biases, methodological limitations, incomplete analyses, and even fraud [5]. Preprint use has been scrutinized both from a public understanding perspective and with regard to scientific principles. Firstly, there is a concerning lack of understanding of preprint data among the general public. For example, widespread media attention given to two small, biased preprints that erroneously claimed smoking to be protective against COVID-19 impacted public health, as it resulted in a surge in nicotine purchases and smoking uptake in certain countries [6].
Secondly, it is reasonable to expect some discrepancies between the content of various documents and sources for the same randomized controlled trial (RCT), particularly between the preprint and the subsequent journal article, as peer review often impacts the content of a manuscript before it is published. A meta-research study of 139 studies reported in preprint and subsequent journal article or in different versions of the preprint found a change in the abstract’s conclusion in 24% of studies [7]. In contrast, a study of 78 preprint–article pairs of RCTs showed consistency in terms of the completeness of reporting [8]. Another analysis of 67 interventional and observational studies found that preprints and their subsequent journal articles were similar in terms of reporting and spin (i.e., distorted interpretation of results) [9]. Similarly, a study of 74 preprint–article pairs of RCTs showed few important differences in treatment effect estimates between the two documents [10].
To further explore the consistency between various documents reporting the results of trials, we assessed the consistency in effect estimates between preprints and subsequent journal articles of COVID-19 RCTs included in a large living systematic review of COVID-19 pharmacological treatments.
Methods
The protocol is available on Open Science Framework (https://osf.io/hfrp4/?view_only=b06282a8429e4ae1af458f4e372576f7). Here, we report the results of objective one: to assess the consistency in the estimates of treatment effects between preprints and the subsequently published articles. We expanded our sample size by including RCTs assessing all pharmacological treatments instead of limiting our analysis to specific treatment types as planned in the protocol. Additionally, we updated the final search to July 20, 2022.
Data source and search
Our study used the data and methods of the COVID-NMA living systematic review (covid-nma.com) [11] [see Methods S1 in the Additional file]. Briefly, COVID-NMA is a living evidence synthesis and living mapping of RCTs on interventions for the prevention and treatment of COVID-19. The search strategy was modified over time to involve searching only two bibliographic databases: the Epistemonikos L-OVE COVID-19 platform [12] and Cochrane COVID-19 Study Register [13]. The Retraction Watch database [14] was also searched to identify retracted trials and directly remove them from the COVID-NMA review (Additional file Table S1). Screening and data extraction were performed by pairs of researchers, independently and in duplicate, with disagreements resolved by consensus and a third researcher, when necessary.
Eligibility criteria
We selected eligible RCTs in the COVID-NMA living systematic review that evaluated pharmacological treatments for patients with COVID-19 and that were originally posted as preprints and subsequently published in a peer-reviewed journal. The last search date was July 20, 2022. We considered the following COVID-NMA-defined critical outcomes:
- Clinical improvement at day 28 (D28), defined as hospital discharge or improvement on the scale used by trialists to evaluate clinical progression and recovery.
- WHO Clinical Progression Score of level 7 or above (i.e., mechanical ventilation +/– additional organ support, or death) (D28).
- All-cause mortality (D28).
- Incidence of any adverse events.
- Incidence of serious adverse events.
We excluded RCTs evaluating preventive interventions (e.g., use of personal protective equipment, movement control strategies), vaccines, non-pharmacological treatments, and supportive treatments for patients admitted to the intensive care unit. We also excluded RCTs that did not report any critical outcome, as well as RCTs that reported different analyses in the two documents (e.g., an interim analysis reported in the preprint and a final analysis reported in the journal article).
Linking preprint and subsequent journal article
The linkage between the preprint and journal article was performed as part of the COVID-NMA living systematic review. The preprint–article linker was developed in collaboration with a research team from the French National Centre for Scientific Research. The tool automatically generated an alert when a preprint was updated or published as a journal article. Pairs of researchers used the tool to identify these subsequent reports and then extracted any additional and/or updated data independently, meeting for consensus to reconcile any disagreements. Consequently, an accurate record of the corresponding preprint and journal publication reports in the COVID-NMA database is available for download as a preprint-publication pair. To identify eligible RCTs, one researcher (MD) retrieved this record from the COVID-NMA database and selected the first preprint posted on a preprint server and the subsequent journal article. When available, we used the online publication date in order to calculate the delay between preprint post and journal article publication. Otherwise, we used the print publication date.
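The delay rule described above (use the online publication date when available, otherwise fall back to the print date) can be sketched as follows. This is an illustrative Python fragment, not part of the study's codebase:

```python
from datetime import date
from typing import Optional


def publication_delay(preprint_posted: date,
                      online_pub: Optional[date],
                      print_pub: Optional[date]) -> int:
    """Days between preprint posting and journal publication.

    Mirrors the rule in the text: prefer the online publication
    date; fall back to the print publication date otherwise.
    """
    journal_date = online_pub if online_pub is not None else print_pub
    if journal_date is None:
        raise ValueError("no journal publication date available")
    return (journal_date - preprint_posted).days


# Hypothetical example: posted 2021-01-10, published online 2021-05-11
print(publication_delay(date(2021, 1, 10), date(2021, 5, 11), None))  # 121
```

The example delay happens to match the study's median of 121 days; the dates themselves are invented for illustration.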
Data extraction
We retrieved data that were previously extracted in duplicate independently by pairs of researchers, with consensus to resolve disagreements for the COVID-NMA living systematic review: publication type (preprint, journal article), publication date (date that the report was published online, when available), trial registration (prospective, retrospective relative to the start date of the trial), funding type (industry, mixed, public, none, not reported/unclear), study centers (single, multicentric), setting (hospital, outpatient clinic), geographical RCT location according to the World Bank Country Income Classification [15], and intervention details.
For the critical outcome measures under consideration, the number of participants with outcome events and the number of participants analyzed were retrieved. Risk of bias was assessed according to the Cochrane Risk of Bias 2 tool [16], and each outcome result was rated as ‘Low’, ‘Some concerns’, or ‘High’ risk of bias. Specifically, we considered the overall risk of bias assessment, i.e., the highest risk of bias found in any domain for any critical outcome in the trial. The previously extracted data were split into two parts and two researchers (MD, CG) verified these data, meeting for consensus if a discrepancy was found.
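The "highest risk found in any domain" rule for the overall judgement can be expressed as a small sketch. This is illustrative only (not the authors' code); the rating labels follow the RoB 2 categories named above:

```python
# Order ratings from best to worst; the overall judgement is the
# worst (highest-severity) rating across all domains and outcomes.
SEVERITY = {"Low": 0, "Some concerns": 1, "High": 2}


def overall_risk_of_bias(domain_ratings: list) -> str:
    """Overall RoB = the worst rating found in any domain."""
    return max(domain_ratings, key=SEVERITY.__getitem__)


print(overall_risk_of_bias(["Low", "Some concerns", "Low"]))  # Some concerns
```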
Data synthesis
For the descriptive analysis, frequencies and percentages were calculated for categorical variables, while medians with interquartile ranges (IQRs) were calculated for continuous variables.
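The analyses were run in R; purely as an illustrative sketch (not the authors' code), the median-with-IQR summary could look like this in Python:

```python
import statistics


def median_iqr(values):
    """Median with interquartile range (25th and 75th percentiles)."""
    q1, median, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return median, (q1, q3)


print(median_iqr([1, 2, 3, 4, 5]))  # (3.0, (2.0, 4.0))
```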
We systematically explored whether the number of participants with outcome events, number of participants analyzed, and treatment effect estimates were consistent between preprints and subsequent journal articles for all critical outcomes. The discrepancies between results reported in a preprint and subsequent journal article were classified as (1) change in the estimate of the effect of at least one outcome, (2) change in the direction of the effect, (3) change in statistical significance, and (4) change in the overall conclusion. We also investigated whether the outcomes were deleted or added in the journal articles compared to the preprints. We used R software, [17] with the metafor [18] and forestplot [19] packages, for all analyses.
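The comparisons themselves were done in R with the metafor and forestplot packages. Purely as an illustration of the four-way classification described above, a hypothetical Python sketch might recompute risk ratios from the extracted counts and flag each type of discrepancy; the function names and the Wald-type confidence interval are assumptions for the example, not the authors' implementation:

```python
import math


def risk_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Risk ratio with a Wald 95% CI on the log scale (non-zero cells assumed)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi


def classify_discrepancy(preprint, article):
    """Compare one outcome's 2x2 data between preprint and journal article.

    Each argument is (events_treatment, n_treatment, events_control, n_control).
    """
    rr_p, lo_p, hi_p = risk_ratio_ci(*preprint)
    rr_a, lo_a, hi_a = risk_ratio_ci(*article)
    flags = []
    # (1) change in the estimate of the effect
    if (rr_p, preprint[1], preprint[3]) != (rr_a, article[1], article[3]):
        flags.append("estimate changed")
    # (2) change in the direction of the effect
    if (rr_p < 1) != (rr_a < 1):
        flags.append("direction changed")
    # (3) change in statistical significance (CI excludes 1 vs. not)
    if (lo_p > 1 or hi_p < 1) != (lo_a > 1 or hi_a < 1):
        flags.append("significance changed")
    return flags or ["consistent"]


# One extra control participant in the article: minor estimate change only
print(classify_discrepancy((10, 100, 20, 100), (10, 100, 20, 101)))
```

A change in the overall conclusion (category 4) was judged from the text of the two documents, so it has no counterpart in a sketch like this.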
Results
Of the 49,651 records screened, 1230 were assessed for eligibility and we identified 135 treatment RCTs that were originally posted as a preprint and subsequently published as a journal article. We excluded 26 RCTs that did not conform to the eligibility criteria: one preprint had been removed from the preprint server; three RCTs were excluded because of an error in data retrieval (i.e., they were incorrectly labelled in the COVID-NMA database as preprints but the data were from trial registry results (n = 2) or from the journal article (n = 1)); three RCTs evaluated non-pharmacological treatments; six RCTs did not report any critical outcome; and 13 RCTs reported an interim analysis in the preprint and a final analysis in the journal article. Increased sample sizes and longer follow-up and enrolment periods were observed in the final analyses of the subsequent journal articles compared to the interim analyses of the preprints. Overall, 109 RCTs were included in the analysis (Fig. 1).
Characteristics of preprints that were subsequently published in a journal article are presented in Table 1. The median delay between preprint and peer-reviewed journal article was 121 (IQR, 73–187) days. The median sample size was 150 (IQR, 71–464) participants, 76% of RCTs had been prospectively registered, 60% received industry or mixed funding, and 72% were multicentric trials. The overall risk of bias was rated as ‘some concerns’ for 80% of RCTs.
Of the 109 preprint–article pairs of RCTs, 81 were consistent for all outcomes. We found six RCTs with at least one outcome missing in the journal article, and 14 RCTs with at least one outcome added in the journal article compared to the preprint. There were nine RCTs that had at least one outcome with a change in the number of participants with outcome events or the number of participants analyzed, which yielded a minor change in the estimate of the effect (Fig. 2) [20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37]. There was one RCT with a change in the direction of the effect. No changes in the statistical significance or overall conclusions between preprint and journal article were observed for any RCT.
Characteristics of the preprints that were never published in a peer-reviewed journal are compared with those that were published (Additional file Table S2). Generally, we found that basic characteristics of RCTs initially posted as preprints were similar between those that were subsequently published and those that were not.
Discussion
In this study, we analyzed the consistency in treatment effect estimates between RCTs first available as a preprint and subsequently published in a peer-reviewed journal. We found only trivial discrepancies between COVID-19 preprints and subsequent journal articles in most pharmacological treatment RCTs. Nevertheless, some outcomes were added and deleted in the journal articles compared with the preprints and one trial showed a change in the direction of effect between preprint and subsequent journal article.
Our study findings demonstrate substantial agreement with the conclusions of other COVID-19 studies. In a retrospective review of 74 RCTs included in a living network meta-analysis [38, 39] up to August 2021, Zeraatkar et al. did not observe important discordance between the first preprint and subsequent journal article [10]. The cross-sectional study by Bero et al. found only marginal changes to outcomes reporting and spin between 67 preprint–article pairs of studies published between March and October 2020 [9]. In contrast, in a meta-research study of preventive, therapeutic, or post-acute care interventions for COVID-19, Oikonomidi et al. found significant changes in results and abstract’s conclusions in 55% of the sample of 66 preprint–article studies published up to August 2020 [7].
While over half (58%) of preprints are subsequently published in a peer-reviewed journal [40], some will remain unpublished, whether because of journal rejection (e.g., for poor methodological or statistical quality) or, in rare cases, because they were never submitted. On this basis, some suggest that preprints should be excluded from meta-analyses [41]. Thus, as part of objective two of our protocol, we conducted a meta-epidemiological study, selecting 37 meta-analyses at different timepoints that included both preprint and journal article RCTs [42].
Strengths and limitations
We assessed the consistency of results between preprint and journal article pairs of RCTs, as significant changes found in the subsequent journal article bring the reliability of preprint data into question. Furthermore, our data were retrieved from a large living systematic review (COVID-NMA). COVID-NMA employed a validated, comprehensive search strategy to identify all relevant evidence.
There are some limitations of our assessment. Firstly, this research was conducted on COVID-19 RCTs, so results may not be generalizable to other fields and study types. In non-COVID-19-related studies, Carneiro et al. [43] determined that preprints were lacking in reporting quality but, on average, the quality of reporting between preprints and subsequent journal articles was comparable. Another study found small differences in journal article conclusions of 7.2% of non-COVID-19–related and 17.2% of COVID-19–related abstracts compared to the preprint [44]. Secondly, for those preprints that were never published in a journal, we could not evaluate whether peer review prevented journal publication due to unsupported conclusions. Nevertheless, we found that trial characteristics were generally similar between preprints that were subsequently published in peer-reviewed journals and those that remained unpublished. Finally, our study is limited to the decisions of the living review. For example, protocol revisions could affect the sample composition.
Conclusion
We identified changes in effect estimates in 8% of COVID-19 randomized controlled trials between preprint and subsequent journal article. Some outcomes were deleted or added in the journal articles; therefore, it is important to retrieve both documents and explore reasons for discrepancies. A critical approach should nonetheless be adopted when using results from preprints, given the lack of peer review.
Data availability
The data and code used during the current study are available at https://github.com/MDavids0n/Preprint_Journal.
References
Kirkham JJ, Penfold NC, Murphy F, Boutron I, Ioannidis JP, Polka J, et al. Systematic examination of preprint platforms for use in the medical and biomedical sciences setting. BMJ Open. 2020;10(12):e041849.
Kwon D. How swamped preprint servers are blocking bad coronavirus research. Nature. 2020;581(7807):130–1.
Results — RECOVERY Trial [Internet]. [cited 2023 Jun 1]. Available from: https://www.recoverytrial.net/results.
Horby P. Why preprints are good for patients. Nat Med. 2022;28(6):1109–9.
Flanagin A, Fontanarosa PB, Bauchner H. Preprints involving medical research—do the benefits outweigh the challenges? JAMA. 2020;324(18):1840–3.
van Schalkwyk MCI, Hird TR, Maani N, Petticrew M, Gilmore AB. The perils of preprints. BMJ. 2020;370:m3111.
Oikonomidi T, Boutron I, Pierre O, Cabanac G, Ravaud P, the COVID-19 NMA Consortium. Changes in evidence for studies assessing interventions for COVID-19 reported in preprints: meta-research study. BMC Med. 2020;18(1):402.
Kapp P, Esmail L, Ghosn L, Ravaud P, Boutron I. Transparency and reporting characteristics of COVID-19 randomized controlled trials. BMC Med. 2022;20(1):363.
Bero L, Lawrence R, Leslie L, Chiu K, McDonald S, Page MJ, et al. Cross-sectional study of preprints and final journal publications from COVID-19 studies: discrepancies in results reporting and spin in interpretation. BMJ Open. 2021;11(7):e051821.
Zeraatkar D, Pitre T, Leung G, Cusano E, Agarwal A, Khalid F et al. Consistency of covid-19 trial preprints with published reports and impact for decision making: retrospective review. BMJ Med [Internet]. 2022 Oct 1 [cited 2022 Nov 7];1(1). Available from: https://bmjmedicine.bmj.com/content/1/1/e000309.
Boutron I, Chaimani A, Meerpohl JJ, Hróbjartsson A, Devane D, Rada G, et al. The COVID-NMA project: building an evidence ecosystem for the COVID-19 pandemic. Ann Intern Med. 2020;173(12):1015–7.
Living Overview of the Evidence (L·OVE) [Internet]. [cited 2021 May 19]. Available from: https://app.iloveevidence.com/loves/5e6fdb9669c00e4ac072701d.
Cochrane COVID-19 Study Register [Internet]. [cited 2021 May 19]. Available from: https://covid-19.cochrane.org/?sf=publishedDate&sd=desc.
Retracted coronavirus (COVID-19) papers [Internet]. Retraction Watch. 2020 [cited 2023 Jun 1]. Available from: https://retractionwatch.com/retracted-coronavirus-covid-19-papers/.
World Bank Country and Lending Groups – World Bank Data Help Desk [Internet]. [cited 2022 Nov 17]. Available from: https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups.
Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898.
R Core Team. R: A language and environment for statistical computing [Internet]. Vienna, Austria: R Foundation for Statistical Computing; 2022. Available from: https://www.r-project.org/.
Viechtbauer W. Conducting Meta-analyses in R with the metafor Package. J Stat Softw. 2010;36:1–48.
Gordon M, Lumley T. forestplot: Advanced Forest Plot Using grid Graphics [Internet]. 2022. Available from: https://CRAN.R-project.org/package=forestplot.
Group RC, Horby PW, Mafham M, Peto L, Campbell M, Pessoa-Amorim G, et al. Casirivimab and imdevimab in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial [Internet]. medRxiv; 2021 [cited 2022 Dec 7]. p. 2021.06.15.21258542. Available from: https://www.medrxiv.org/content/10.1101/2021.06.15.21258542v1.
Abani O, Abbas A, Abbas F, Abbas M, Abbasi S, Abbass H, et al. Casirivimab and Imdevimab in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial. The Lancet. 2022;399(10325):665–76.
Mobarak S, Salasi M, Hormati A, Khodadadi J, Ziaee M, Abedi F et al. Evaluation of the effect of sofosbuvir and daclatasvir in hospitalised COVID-19 patients: a randomized double-blind clinical trial (DISCOVER) [Internet]. Rochester, NY; 2021 [cited 2022 Nov 21]. Available from: https://papers.ssrn.com/abstract=3792895.
Mobarak S, Salasi M, Hormati A, Khodadadi J, Ziaee M, Abedi F, et al. Evaluation of the effect of sofosbuvir and daclatasvir in hospitalized COVID-19 patients: a randomized double-blind clinical trial (DISCOVER). J Antimicrob Chemother. 2022;77(3):758–66.
Group TRC, Horby PW, Estcourt L, Peto L, Emberson JR, Staplin N, et al. Convalescent plasma in patients admitted to hospital with COVID-19 (RECOVERY): a randomised, controlled, open-label, platform trial [Internet]. medRxiv; 2021 [cited 2022 Dec 7]. p. 2021.03.09.21252736. Available from: https://www.medrxiv.org/content/10.1101/2021.03.09.21252736v1.
Abani O, Abbas A, Abbas F, Abbas M, Abbasi S, Abbass H, et al. Convalescent plasma in patients admitted to hospital with COVID-19 (RECOVERY): a randomised controlled, open-label, platform trial. The Lancet. 2021;397(10289):2049–59.
Zhang J, Rao X, Li Y, Zhu Y, Liu F, Guo G et al. High-dose vitamin C infusion for the treatment of critically ill COVID-19 [Internet]. In Review; 2020 [cited 2022 Dec 7]. Available from: https://www.researchsquare.com/article/rs-52778/v1.
Zhang J, Rao X, Li Y, Zhu Y, Liu F, Guo G, et al. Pilot trial of high-dose vitamin C in critically ill COVID-19 patients. Ann Intensive Care. 2021;11(1):5.
Murai IH, Fernandes AL, Sales LP, Pinto AJ, Goessler KF, Duran CSC, et al. Effect of Vitamin D3 Supplementation vs Placebo on Hospital Length of Stay in Patients with Severe COVID-19: A Multicenter, Double-blind, Randomized Controlled Trial [Internet]. medRxiv; 2020 [cited 2022 Dec 7]. p. 2020.11.16.20232397. Available from: https://www.medrxiv.org/content/10.1101/2020.11.16.20232397v1.
Murai IH, Fernandes AL, Sales LP, Pinto AJ, Goessler KF, Duran CSC, et al. Effect of a single high dose of vitamin D3 on hospital length of stay in patients with moderate to severe COVID-19: a Randomized Clinical Trial. JAMA. 2021;325(11):1053–60.
Temesgen Z, Burger CD, Baker J, Polk C, Libertin C, Kelley C, et al. Lenzilumab Efficacy and Safety in Newly Hospitalized Covid-19 Subjects: Results from the Live-Air Phase 3 Randomized Double-Blind Placebo-Controlled Trial [Internet]. medRxiv; 2021 [cited 2022 Dec 7]. p. 2021.05.01.21256470. Available from: https://www.medrxiv.org/content/10.1101/2021.05.01.21256470v1.
Temesgen Z, Burger CD, Baker J, Polk C, Libertin CR, Kelley CF, et al. Lenzilumab in hospitalised patients with COVID-19 Pneumonia (LIVE-AIR): a phase 3, randomised, placebo-controlled trial. Lancet Respir Med. 2022;10(3):237–46.
Kyriazopoulou E, Poulakou G, Milionis H, Metallidis S, Adamis G, Tsiakos K, et al. Early Anakinra Treatment for COVID-19 Guided by Urokinase Plasminogen Receptor [Internet]. medRxiv; 2021 May [cited 2021 Sep 8]. p. 2021.05.16.21257283. Available from: https://www.medrxiv.org/content/10.1101/2021.05.16.21257283v1.
Kyriazopoulou E, Poulakou G, Milionis H, Metallidis S, Adamis G, Tsiakos K, et al. Early treatment of COVID-19 with anakinra guided by soluble urokinase plasminogen receptor plasma levels: a double-blind, randomized controlled phase 3 trial. Nat Med. 2021;27(10):1752–60.
Quinn TM, Gaughan EE, Bruce A, Antonelli J, O’Connor R, Li F, et al. Randomised controlled trial of intravenous nafamostat mesylate in COVID pneumonitis: phase 1b/2a experimental study to investigate safety, pharmacokinetics and pharmacodynamics [Internet]. medRxiv; 2021 [cited 2022 Nov 21]. p. 2021.10.06.21264648. Available from: https://www.medrxiv.org/content/10.1101/2021.10.06.21264648v1.
Quinn TM, Gaughan EE, Bruce A, Antonelli J, O’Connor R, Li F, et al. Randomised controlled trial of intravenous nafamostat mesylate in COVID pneumonitis: phase 1b/2a experimental study to investigate safety, pharmacokinetics and pharmacodynamics. eBioMedicine. 2022;76.
Sullivan DJ, Gebo KA, Shoham S, Bloch EM, Lau B, Shenoy AG, et al. Randomized Controlled Trial of Early Outpatient COVID-19 Treatment with High-Titer Convalescent Plasma [Internet]. medRxiv; 2021 [cited 2022 Dec 7]. p. 2021.12.10.21267485. Available from: https://www.medrxiv.org/content/10.1101/2021.12.10.21267485v1.
Sullivan DJ, Gebo KA, Shoham S, Bloch EM, Lau B, Shenoy AG, et al. Early Outpatient Treatment for Covid-19 with Convalescent plasma. N Engl J Med. 2022;386(18):1700–11.
Siemieniuk RA, Bartoszko JJ, Zeraatkar D, Kum E, Qasim A, Martinez JPD, et al. Drug treatments for covid-19: living systematic review and network meta-analysis. BMJ. 2020;370:m2980.
Bartoszko JJ, Siemieniuk RAC, Kum E, Qasim A, Zeraatkar D, Ge L, et al. Prophylaxis against covid-19: living systematic review and network meta-analysis. BMJ. 2021;373:n949.
Eckmann P, Bandrowski A. PreprintMatch: a tool for preprint to publication detection shows global inequities in scientific publication. PLoS ONE. 2023;18(3):e0281659.
Brietzke E, Gomes FA, Gerchman F, Freire RCR. Should systematic reviews and meta-analyses include data from preprints? Trends Psychiatry Psychother. 45:e20210324.
Davidson M, Evrenoglou T, Graña C, Chaimani A, Boutron I. No evidence of important difference in summary treatment effects between COVID-19 preprints and peer-reviewed publications: a meta-epidemiological study. J Clin Epidemiol. 2023;162:90–7.
Carneiro CFD, Queiroz VGS, Moulin TC, Carvalho CAM, Haas CB, Rayêe D, et al. Comparing quality of reporting between preprints and peer-reviewed articles in the biomedical literature. Res Integr Peer Rev. 2020;5(1):16.
Brierley L, Nanni F, Polka JK, Dey G, Pálfy M, Fraser N, et al. Tracking changes between preprint posting and journal publication during a pandemic. PLOS Biol. 2022;20(2):e3001285.
Acknowledgements
The authors thank Elise Diard (Centre d’Epidémiologie Clinique, CRESS, INSERM U1153, Hôtel-Dieu [AP-HP], Cochrane France) for her work on the website and extraction tool development. We thank Rouba Assi, and Hillary Bonnet (Centre d’Epidémiologie Clinique, CRESS, INSERM U1153, Hôtel-Dieu [AP-HP], Cochrane France) who verified a small portion of the data. We also thank Carolina Riveros (Centre d’Epidémiologie Clinique, CRESS, INSERM U1153, Hôtel-Dieu [AP-HP], Cochrane France) who led the screening for the COVID-NMA initiative, as well as all members of the COVID-NMA consortium.
Funding
No specific funding has been received for this research. MD received a PhD fellowship from the Université Paris Cité. Data were generated in the context of the COVID-NMA initiative which received funding from Université Paris Cité, Assistance Publique Hôpitaux de Paris (APHP), Inserm, Cochrane France (Ministry of Health), the French Ministry of Higher Education and Research, Agence Nationale de la Recherche (ANR), and the World Health Organization (WHO).
Author information
Contributions
MD, AC, and IB conceived and designed the study. MD, TE and AC conducted the statistical analyses. MD and CG were involved in the acquisition of the data. All authors were involved in the interpretation of the data. MD drafted the manuscript. All authors critically reviewed the manuscript. All authors read and approved the final manuscript. AC and IB supervised the work.
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Electronic supplementary material
Supplementary Material 1:
Definitions of trial characteristics; Methods S1; Table S1; Table S2; Figure S1
Cite this article
Davidson, M., Evrenoglou, T., Graña, C. et al. Comparison of effect estimates between preprints and peer-reviewed journal articles of COVID-19 trials. BMC Med Res Methodol 24, 9 (2024). https://doi.org/10.1186/s12874-023-02136-8