On the role of transparency and reproducibility
Transparency and reproducibility are hallmarks of quality scientific research due to their relationship with independent verification (Stodden, 2020). Open data and open code contribute to both by allowing the scientific community to more easily verify the authenticity of purported scientific discoveries and their supporting evidence. Data sharing also allows researchers to reuse others’ data sets for further analysis or to supplement their own data, contributing to new insights within their field of study.
These factors are especially important in cases where scientific research may quickly and directly impact clinical practice or public policy, such as research on the COVID-19 pandemic. Among many other impacts on the research landscape, COVID-19 has increased the popularity of pre-prints from both a production and a consumption standpoint. The number of COVID-19 pre-prints posted to medRxiv increased in the early stages of the pandemic, while non-COVID-19 pre-print numbers remained largely as expected. A similar pattern was apparent in abstract views by medRxiv users, with COVID-19 pre-print abstracts viewed more than 15 times as often as non-COVID-19 pre-print abstracts (Fraser et al., 2021). For these reasons, it is important to examine open science standards and reproducibility within pre-print repositories.
Open data is generally accepted to be beneficial to the scientific process and to a paper’s reproducibility potential, so it is concerning that around 75% of pre-prints in our sample contained no open data markers. This concern is slightly mitigated by recognition of the challenges of working with biomedical data compared with data in other fields, notably privacy and ethics concerns when working with personal data (Floca, 2014). The COVID-19 pandemic has prompted open science initiatives, as evidenced by the creation of open data repositories such as the dashboard maintained by the Center for Systems Science and Engineering at Johns Hopkins University (Dong et al., 2020) and the large number of publishers who removed paywalls from published COVID-19 research (Gill, 2020). While the intention at the start of the pandemic was that there would be ‘clear statements regarding the availability of underlying data’ (Wellcome, 2020), some retractions have since been based on ‘unreliable or nonexistent data’ (da Silva et al., 2021a).
Open code as an open science marker is context and field-dependent; for instance, not all biomedical research papers rely on computational methods for their analyses. However, in pre-prints where code comprises a large portion of the methodology or results, posting it openly to repositories like GitHub contributes to a pre-print’s potential reproducibility. This is especially important when computational methods are used to form predictions about emerging situations where data or laboratory evidence is limited, as was the case for modelling studies in the early days of the COVID-19 pandemic. We also see growing concern over the quality and consequences of this sort of research, with bioRxiv no longer allowing purely computational work (Kwon, 2020).
A further concern is the adverse selection created by meeting the open science aims of sharing code and data: authors who share their data and code open their work up to criticism. If such authors make the same mistakes as authors who choose not to publish their data and code, the mistake is more likely to go unnoticed where the data and code were not published. The current system is therefore biased against those who follow best practice. McGuinness and Sheppard (2021) advocate for ‘(s)trict editorial policies that mandate data sharing’, and other changed norms are needed.
The role of pre-print repositories
There has been a large amount of research on COVID-19 (da Silva et al., 2021b). Many concerns have arisen from the rate at which COVID-19 research has been posted and consumed through pre-print repositories, particularly in the early stages of the pandemic (Raynaud et al., 2020). Rushed scientific research has the potential to skip (or at least place less importance on) open science practices, so it may be reasonable to expect a decrease in open data or code markers in the initial few months of the pandemic. We found little relationship between date posted and the likelihood of having open data or code markers, with the proportion of pre-prints containing these markers fluctuating from month to month. This suggests that open science practices are more influenced by other factors, perhaps including training, publication bias, or the nature of the pre-print itself. On the other hand, we do not see an overall long-term increase in either open data or open code markers throughout our period of analysis, which we might have expected in the context of the open science movements the pandemic has fostered. Although not specific to pre-prints, Else (2020) found that overall research output fluctuated between fields and topics (namely modelling disease spread, public health, diagnostics and testing, mental health, and hospital mortality) throughout different stages of the pandemic, which may account for some of the fluctuation and the overall lack of a noticeable trend in our sample.
To emphasize the ongoing need for open data and code in modelling a pandemic, we consider two high-profile epidemiological models that emerged in early 2020. Modelling was conducted by Imperial College London (ICL) (Ferguson et al., 2020) and the Institute for Health Metrics and Evaluation (IHME) at the University of Washington (Murray, 2020), and both were initially posted to pre-print repositories. The ICL model went on to become the most cited pre-print as of December 2020 (Else, 2020), and both had significant influence over policy and public health decisions worldwide (Adam, 2020). An independent review of these two models by Jin et al. (2020) found that while code and data were openly available for both, only the ICL model was reproducible, due to limited transparency on the underlying methodology of the IHME model. The open-source nature of these models was fundamental to reproduction attempts and illustrates the need for open data and code in COVID-19 research, particularly as pre-prints influence public decision-making.
In the context of the above factors, it was encouraging to find in our analysis that the proportion of pre-prints with open data or code posted to arXiv increased from 7% pre-pandemic to 25% for COVID-19-related pre-prints. This pattern, however, was not observed among the analyzed bioRxiv and medRxiv pre-prints, and may simply reflect the nature of COVID-19 pre-prints. With many pre-prints from these repositories still pertaining to epidemiological modelling, one might hope that they could all be subjected to the same kind of analysis that Jin et al. (2020) conducted on the examples above, which is only possible when the relevant code and data are available. Our analysis suggests a need for future investigation and potential overall improvement in open science standards for these types of pre-prints (subject to the data and code considerations already discussed). This need is again emphasized by the new-found speed at which pre-prints may gain public, media, and political attention in the context of the pandemic, particularly those from medRxiv and bioRxiv. One further concern is raised by Teixeira and Jaime (2020), who show that some pre-prints on these two servers were withdrawn or retracted, with relatively little information about the underlying reason, after gaining substantial media attention.
The importance of open data and open code
Beyond pre-prints, COVID-19 has influenced publication and peer review processes, with timelines for COVID-19 papers being expedited at the expense of longer waits for other scientific research (Else, 2020). It is important that open data and code standards be maintained in published work as well. In our sample, published pre-prints contain open data or code markers in similar proportions to their unpublished counterparts, a pattern that was present both for pre-prints related to COVID-19 and for those posted in 2019. This appears initially to alleviate some concerns over the relationship between open data and publication bias, that is, the potential that journals have favored novel yet less transparent or reproducible papers over those with null results but a high standard of open science practices. However, publication bias is complex, and this result should be approached with caution. Concerns have already been raised through systematic reviews of COVID-19 publications (Raynaud et al., 2020), and oversights in data accessibility have led to high-profile retractions of publications in the past; for example, papers from The Lancet and the New England Journal of Medicine were withdrawn due to concerns over the private nature of their underlying dataset (Ledford and Richard, 2020). Cabanac et al. (2021) show that not all pre-prints are linked to their subsequent peer-reviewed publication, which may further bias our results. There is also the potential for bias because older pre-prints have had more time to be published than newer pre-prints. Further, Bero et al. (2021) and Oikonomidi et al. (2020) show that differences between updated versions of the same pre-print can be substantial; again, this is something that we do not account for and that could bias our results.
In all fields of science, increasing access to the data and code used for pre-printed or published research is a step toward more transparent, reproducible, and reliable research. The COVID-19 pandemic has created a novel, constantly changing scientific culture that should be navigated with care to uphold standards of scientific practice, for both the research community and the safety of the public. Our analysis shows that there is room for improvement in open data and code availability within COVID-19 pre-prints posted to arXiv, bioRxiv, and medRxiv.
There is demand for timely research and high-frequency results because the pandemic evolves rapidly. Pre-prints are efficient in this role because no time is spent on peer review. They also allow lesser-known researchers to disseminate their research more widely, given the possibility that fast-tracked peer review may be biased towards established researchers. While there is a clear need for pre-prints, the point remains that they do not go through the peer review process. Questions of quality and validity are particularly pertinent in the COVID-19 context because poorly validated results and false information may spread quickly and have real effects. We are not saying that peer review implies that a paper is of high quality; we are instead saying that the provision of code and data alongside a pre-print goes some way to allowing others to trust its findings, even though it has not been peer-reviewed. One way this could be encouraged would be for all pre-print repositories to have authors characterize, as part of their submission, the extent to which they have adopted open science practices, as is already done on SocArXiv. Although pre-prints that do not adopt these practices should not be rejected from pre-print repositories, greater clarity around this would be useful and might move the state of the art forward.
Weaknesses and next steps
Future work could expand our analysis to consider the geographic distribution of research and the potential influence of different practices and policies concerning open science. This is important because the epicenter of the pandemic shifted over time, which may have implications for our time-based analysis.
A logical next step would be to extend this analysis to additional pre-print servers. We have begun considering samples of pre-pandemic and COVID-19-related pre-prints posted to SocArXiv, a social sciences pre-print server hosted by the Center for Open Science. We validated the ODDPub algorithm against the presence of data links provided by pre-print authors upon submission (available in the pre-print metadata drawn from the Open Science Framework API) and found that the algorithm performed with 52% sensitivity on the 2019 sample and 29% sensitivity for COVID-19-related pre-prints. This high rate of false negatives for open data detection is concerning, and we concluded that the ODDPub algorithm is not suitable for use on pre-prints from this server without modification. A more generalized (or perhaps field-specific) algorithm would be necessary for analysis of open data and code availability in SocArXiv and other more specialized servers. Details of this validation are available in Appendix C.
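To make the validation step above concrete, the following is a minimal sketch of the sensitivity calculation, written against hypothetical records; pairing a metadata-derived label with a detection flag mirrors the comparison described, but the values and structure are illustrative rather than drawn from our pipeline.

```python
# Minimal sketch of the validation described above (records are hypothetical).
# Each record pairs a metadata-derived label (author supplied a data link on
# submission) with the text-mining result (open data marker detected).

records = [
    (True, True),    # data link in metadata, detected by the algorithm
    (True, False),   # data link in metadata, missed by the algorithm
    (False, False),
    (True, False),
    (False, True),
]

true_positives = sum(1 for has_link, detected in records if has_link and detected)
false_negatives = sum(1 for has_link, detected in records if has_link and not detected)

# Sensitivity: the share of pre-prints with an author-provided data link that
# the text-mining step also flags as having open data.
sensitivity = true_positives / (true_positives + false_negatives)
print(f"Sensitivity: {sensitivity:.0%}")  # 33% for these invented records
```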
We recognize that factors beyond open data and code play a large role in the reproducibility of scientific research. Not all pre-prints providing open data or code will be reproducible. Factors such as data documentation, methodological reporting, and software choice, among many others, all play a role in the reproduction process and should be given just as much weight when results are disseminated.
An important weakness is the potential presence of false negatives in the indicators of publication in our dataset. Abdill and Ran (2019) estimate that the false-negative rate may be as high as 37.5% for data pulled from the bioRxiv API, meaning our analysis of published pre-prints may represent only a fraction of those that have actually been published. It is unclear to what extent this is the case for other repositories, or what bias may exist in the subset of pre-prints for which publication was detected, because it is likely that this process relies on title-based text matching (Abdill and Ran, 2019). It is also likely that some of our more recently sampled pre-prints will be published in the future, which we could not account for at the time of our data collection.
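To illustrate how title-based matching of the kind Abdill and Ran (2019) describe can produce false negatives, the sketch below compares an invented pre-print title with an invented journal title under an assumed similarity threshold; the titles, threshold, and matching method are ours for illustration, not those used by any repository.

```python
from difflib import SequenceMatcher

def title_similarity(a: str, b: str) -> float:
    """Similarity ratio between two lightly normalized titles."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

preprint_title = "Estimating the transmissibility of SARS-CoV-2 in three regions"
journal_title = "Transmissibility of SARS-CoV-2: estimates from three regions"

THRESHOLD = 0.90  # an assumed strict cut-off for declaring a match
score = title_similarity(preprint_title, journal_title)
print(round(score, 2))     # well below the threshold despite being the same paper
print(score >= THRESHOLD)  # False: the published version goes undetected
```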
Our paper depends on search responses from the various repositories, which are based on our selection of keywords. That selection is not exhaustive; for instance, searching for ‘the pandemic’ might surface additional papers. Future work could make this keyword approach more systematic, for instance by following King et al. (2017).
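As a simple illustration of how the choice of keywords shapes the sample, the sketch below filters invented titles against a base keyword set and an expanded set that adds ‘the pandemic’; the titles, and the keywords beyond those named in the text, are assumptions made for the example.

```python
# Illustrative keyword filter over invented pre-print titles.
titles = [
    "Modelling COVID-19 transmission in care homes",
    "Mental health during the pandemic: a longitudinal survey",
    "Protein folding with deep learning",
]

base_keywords = {"covid-19", "sars-cov-2", "coronavirus"}
expanded_keywords = base_keywords | {"the pandemic"}  # the broader term discussed above

def matches(title: str, keywords: set) -> bool:
    lowered = title.lower()
    return any(keyword in lowered for keyword in keywords)

print([t for t in titles if matches(t, base_keywords)])      # one title matched
print([t for t in titles if matches(t, expanded_keywords)])  # two titles matched
```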
We also recognize that this work relies heavily on text-based analysis, which was not verified directly in most cases and may therefore carry higher levels of uncertainty. The oddpub package was built to analyze biomedical publications, and it may be that some of the differences we find between repositories are due to this. We also note that the ODDPub algorithm is relatively narrow in its definition of “open”, excluding data that are available via registration or in some other restricted form. Considering a broader definition of openness, either through a less restrictive algorithm or through manual verification, would likely produce different results, particularly for pre-prints using clinical data. Future work could take smaller sub-samples to validate factors such as publication status, paper topic, and open code and data status, beyond the approaches we used here.
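To make concrete what a less restrictive detection rule might look like, the sketch below distinguishes statements about restricted-access data (for example, data available upon registration) from fully open statements; the patterns are simple assumptions for illustration and are far narrower than ODDPub’s actual rule set.

```python
import re

# Illustrative patterns only; ODDPub's actual rules are far more extensive.
OPEN_PATTERNS = [
    r"data (?:are|is) (?:freely |publicly )?available at",
    r"deposited (?:in|at) (?:github|zenodo|figshare|dryad)",
]
RESTRICTED_PATTERNS = [
    r"available (?:up)?on (?:reasonable )?request",
    r"available (?:after|upon) registration",
]

def classify_availability(statement: str) -> str:
    """Classify a data availability statement under a broader notion of sharing."""
    lowered = statement.lower()
    if any(re.search(p, lowered) for p in OPEN_PATTERNS):
        return "open"
    if any(re.search(p, lowered) for p in RESTRICTED_PATTERNS):
        return "restricted"
    return "none detected"

text = "De-identified patient data are available upon registration with the registry."
print(classify_availability(text))  # "restricted": shared, but not open in ODDPub's sense
```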