
Optimizing peer review to minimize the risk of retracting COVID-19-related literature


Retractions of COVID-19 literature in both preprints and the peer-reviewed literature serve as a reminder that there are still challenging issues underlying the integrity of the biomedical literature. The risks to academia become larger when such retractions take place in high-ranking biomedical journals. In some cases, retractions result from unreliable or nonexistent data, an issue that could easily be avoided by having open data policies, but there have also been retractions due to oversight in peer review and editorial verification. As COVID-19 continues to affect academics and societies around the world, failures in peer review might also constitute a public health risk. The effectiveness by which COVID-19 literature is corrected, including through retractions, depends on the stringency of measures in place to detect errors and to correct erroneous literature. It also relies on the stringent implementation of open data policies.

NEJM and Lancet high-profile retractions: an academic wake-up call

The back-to-back retractions in early June 2020 of two highly publicized COVID-19 papers in the New England Journal of Medicine (NEJM) (Mehra et al. 2020a) and The Lancet (Mehra et al. 2020b), as a result of the unreliable or non-existent Surgisphere-derived data upon which those papers’ analyses were based, serve as an important alert to the biomedical academic community. Surgisphere is a firm that specializes in hospital and health care analytics. A thematically related SSRN preprint by two of the same authors (Amit Patel and Sapan Desai), cited by Sharun et al. (2020), also silently disappeared.Footnote 1 Silently retracted, withdrawn or disappearing COVID-19 preprints are not being sufficiently discussed by academics, but they constitute a fundamental threat to the integrity of open access (Teixeira da Silva 2020a). On the heels of those high-profile retractions, an editorial by Wallis (2020) that alluded to the creation of a COVID-19 Severity Scoring Tool, based on a collaboration between SurgisphereFootnote 2 and the African Federation for Emergency Medicine (AFEM), was retracted and republished. To date (November 3, 2020), the Mehra et al. (2020a, b) papers have, according to Google Scholar (GS), accrued 591 and 701 citations, respectively. Currently, GS shows 19 versions of Mehra et al. (2020a) and 59 versions of Mehra et al. (2020b), with no indication that the articles carry an expression of concern or have been retracted.Footnote 3 A Web of Science search for the title “Cardiovascular disease, drug therapy, and mortality in Covid-19” yielded three results, with Mehra et al. (2020a) showing 203 citations in core collections; many of these citing papers were published after the expression of concern and the retraction. The expression of concern is cited 5 times and the retraction notice 45 times. A Web of Science title search for the Mehra et al. (2020b) paper yielded two results: the retraction notice, cited 173 times, and the expression of concern, cited 8 times. A PubMed search yielded 148 citations for Mehra et al. (2020a), 9 for its expression of concern and 104 for its retraction notice, and 195 citations for Mehra et al. (2020b), 15 for its expression of concern and 82 for its retraction notice.

The NEJM and The Lancet retractions, which took place within days of the publication of errata and/or expressions of concern, call for added awareness, reflection, and reform. These two journals, in particular, are not traditionally associated with exploitative or predatory behavior, which is often characterized by poor or non-existent peer review and editorial oversight (Teixeira da Silva et al. 2019). The NEJM has a 2019 Clarivate Analytics journal impact factor (JIF) of 74.699, while the 2019 JIF of The Lancet is 60.392. Those three retracted papers, if used or cited, may constitute a public health risk, because potentially dangerous and/or misleading information related to public health was released to the public, presented as clinically and academically valid. According to the Guardian,Footnote 4 the World Health Organization (WHO) halted trials of hydroxychloroquine (HCQ) in response to the findings of the Mehra et al. (2020b) paper, but resumed them soon after its retraction. Fogel (2018) listed 11 reasons why clinical trials might fail, one of them being unreliable data.

A survey of scientists by De Gruyter indicated that about 50% of respondents “have ‘no time at all’ or ‘less time’ for research and writing”.Footnote 5 This is an important issue, since the pool of peer reviewers for COVID-19 papers likely stems from this same group of academics. Conventional correction of the literature may thus be insufficient. Academics therefore have to build their own defenses and strategies to certify the legitimacy of their research (Lakens 2020), because peer review, even in top-ranked journals, may be fallible. As the number of retracted COVID-19 papers increases (Abritis et al. 2020; Teixeira da Silva et al. 2020), these cases can serve as learning points for fortified ethics guidelines and policies.

Risks of select publishing practices during the pandemic

One possible reason why an academic might continue to cite unreliable literature that poses a danger to public health may be the poor indication of a paper’s retracted status: an inconspicuous small red notice on the header of the retracted NEJM paper versus a bold across-the-page “retraction” stamped on the retracted The Lancet paper. Another possibility may be the existence of unretracted copies on social media, third-party websites or the black/pirate open access site Sci-Hub. Some papers that cited these now-retracted papers (e.g. Boulware et al. 2020) might themselves need to be corrected, while the metrics of these journals also need to be adjusted (Teixeira da Silva and Dobránszki 2018).

Another risk may be the rise or fortification of a parallel academic publishing market(s), of low quality and/or predatory in nature, that might be exploiting COVID-19 papers for profit (extraction of article processing fees), or that authors might be exploiting for quick and easy publications (Teixeira da Silva 2020b). The trade-off between rigor in peer review and speed to publish COVID-19 research critical for public health (Fig. 1) should be less of a consideration for top journals where editors need to maintain strict publishing standards (Kun 2020; Matias-Guiu 2020; Palayew et al. 2020).

Fig. 1

Trade-off between rigor and speed in peer review. Pressure to achieve the latter may result in a compromise of the former. This phenomenon, which has become acute in the COVID-19 era, has particularly serious reputational consequences for high-ranking journals

Some editors noted a decline in peers’ willingness to accept review invitations, with overwork or preoccupation cited as reasons to decline (Toth 2020). Others called on editors to curtail requests for additional experiments and to shorten the revision period, suggesting that some peer review might be rushed and that some results might be too provisional or superficial, potentially lowering academic standards rather than upholding them (Eisen et al. 2020; Horbach 2020). There are also risks of badly written, superficial and inaccurate systematic reviews (Yu et al. 2020).

There is also a risk that other important health issues are given less priority. Researchers focusing on other important contagious diseases, such as Ebola or influenza, or on aspects that they might perceive as essential to humanity, such as climate change, HIV/AIDS, cancer research, malnutrition, or others, might view COVID-19-related research as currently receiving preferential treatment, i.e., a crowding-out effect, or pandemic research exceptionalism (London and Kimmelman 2020). Barakat et al. (2020) found that the median time from receipt to publication of COVID-19-related papers was eight times shorter than that of a control group of non-COVID-related papers from the previous year.

Early data (January 1 to June 30, 2020) from Clarivate Analytics’ Web of Science and Elsevier’s Scopus indicate that less than 50% of papers published on COVID-19 were original research papers, the majority of the remaining literature being documents such as editorials, letters or perspectives, while a small fraction (approximately 0.5–0.8%) were errata and/or retractions (Teixeira da Silva et al. 2020). According to Di Girolamo and Reynders (2020), a larger share of published COVID-19 papers are secondary article types (i.e., editorials, opinions, letters) than during the H1N1 pandemic; they recommend that action be taken to flatten the curve of such articles so as not to dilute important new knowledge about the disease.

The analysis by Teixeira da Silva et al. (2020) also revealed that not all COVID-19 papers and their data sets are OA, even though they should be: several publishers pledged to make all work related to COVID-19 OA, in response to a call by National Science and Technology Advisors from 12 countries on March 13, 2020,Footnote 6 signing the Wellcome consensus statement, in a bid to fortify the open science and open data (OD) movements and as a noble gesture of aiding humanity and sustaining the integrity of medical science. An analysis of COVID-19-related research that should be OA, but is not, in contravention of that agreement, is needed. Furthermore, researchers working on other important diseases might feel that their work also deserves OA status, similar to COVID-19-related research, as well as fair peer review and editorial handling.Footnote 7

Separately, and unrelated to the retracted Mehra et al. (2020a, b) papers, Zhuang et al. (2020) was withdrawn (i.e., retracted) due to public criticism, and the public file was deleted. Despite this, the paper and its abstract are still listed at ResearchGate,Footnote 8 but the paper is not indicated as retracted, thus inviting academics to cite this paper, which has already accrued 28 citations according to GS. A paper by Funck-Brentano and Salem (2020) that cited Mehra et al. (2020b) was retracted and subsequently republished as Funck-Brentano et al. (2020), and while its ResearchGate page shows a “retracted” label, the PDF file is still of the unretracted paper.Footnote 9 Incidentally, the Funck-Brentano and Salem (2020) paper, which was retracted a few days after publication, had been cited by the coordinator of the White House coronavirus task force to support a decision against importing COVID-19 tests.Footnote 10 The Funck-Brentano and Salem (2020) paper has accrued 29 citations according to GS, despite its retracted status. The use, reliance on, or citation of COVID-19 literature that has been badly reviewed, non-reviewed (preprints), or retracted, may pose a public health risk, because misinformation and incorrect perceptions may be perpetuated (Ioannidis 2020; Jacobsen and Vraga 2020). The risk is fortified in the health sector where practitioners are pressed for time, under-resourced, and where advice can spread by word-of-mouth (Martin 2017).

Measures needed to minimize risks associated with COVID-19 literature and avoid retractions

COVID-19 continues to affect lives and lifestyles across the globe in deadly and tangible ways. Although rapid retractions and insightful retraction notices are a good start in correcting erroneous COVID-19 literature, greater structural reform, fortified peer review and stricter editorial handling are needed for full and thorough accountability.

Fortification of the publishing process moving forward is needed. Trust and transparency need to increase and would benefit if the datasets, peer review reports, editorial comments and decisions, and authors’ responses, to all COVID-19 papers were released publicly, i.e., an OD policy (Shuja et al. 2020).Footnote 11

Rigorous peer review is needed as a safeguard to reduce the risk of publishing research that provides no new knowledge or that is flawed (Drummond 2016). Peers without biases or self-interest should be selected to review and accept research that adds new knowledge and builds on the existing literature, including replication studies (Csiszar 2016; Mavrogenis et al. 2020). Existing knowledge is assumed to be the status quo unless disputed by new evidence. In this framing, peer review resembles a hypothesis test. Reviewers should take a conservative approach and tentatively assume, before reviewing, that the manuscript does not contribute new knowledge (the null hypothesis), unless the evidence provided is sufficiently robust to disprove this assumption; when it is, the null hypothesis is rejected and the manuscript is published. There is still a chance that an unsound paper is published, since a type I error might occur (Ioannidis 2005): the reviewer recommends acceptance of a flawed manuscript. There is also the possibility of a type II error, i.e., failing to reject the null hypothesis that a paper contributes no new knowledge when in fact it does: the reviewer recommends rejection of a good (i.e., scientifically sound) manuscript. This happens especially in journals with very low acceptance rates (Björk 2019). Peer review is supposed to reduce type I errors by lowering the chances of accepting flawed manuscripts, but doing so increases the chances of a type II error, namely the likelihood that a good paper is not published. There is thus a trade-off between the likelihood of accepting an unsound manuscript and the likelihood of rejecting a good one: minimizing the likelihood of a type I error leads to very few papers being accepted, but also results in numerous good papers being rejected (Heckman and Moktan 2020).
In order to reduce the risk of a type I error, Oller and Shaw (2020) advocated for research related to vaccine safety and analysis to be more rigorous, unbiased and independent. At the Medical Journal of Australia (MJA), select COVID-19 papers of an urgent nature are offered the possibility of an automatic preprint option, but the precise criteria for inclusion or exclusion of this exclusive publishing model are not clearly defined, nor is it clear what advantage there is in offering a preprint option that preprint servers such as bioRxiv or medRxiv do not already offer (Tally 2020).
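The rigor-versus-error trade-off described above can be illustrated with a toy Monte Carlo sketch. This is our own illustrative model, not an analysis from any of the cited studies: manuscripts are either sound or flawed, a reviewer observes quality with Gaussian noise, and acceptance requires the observed score to clear a threshold. All numeric parameters are hypothetical.

```python
import random

def review_error_rates(threshold, n=100_000, flawed_share=0.5, noise=1.0, seed=42):
    """Toy model of peer review as a noisy test.

    Sound papers have true quality 1.0, flawed papers 0.0 (arbitrary units).
    A reviewer observes quality plus Gaussian noise and accepts when the
    observed score exceeds `threshold`. Returns (type_i, type_ii): the
    share of flawed papers accepted and the share of sound papers rejected.
    """
    rng = random.Random(seed)
    flawed_accepted = sound_rejected = n_flawed = n_sound = 0
    for _ in range(n):
        flawed = rng.random() < flawed_share
        true_quality = 0.0 if flawed else 1.0
        observed = true_quality + rng.gauss(0.0, noise)
        if flawed:
            n_flawed += 1
            flawed_accepted += observed > threshold
        else:
            n_sound += 1
            sound_rejected += observed <= threshold
    return flawed_accepted / n_flawed, sound_rejected / n_sound

# Raising the bar cuts type I errors but inflates type II errors.
for t in (0.0, 0.5, 1.0, 1.5):
    t1, t2 = review_error_rates(t)
    print(f"threshold={t:.1f}  type I={t1:.2f}  type II={t2:.2f}")
```

Raising the threshold (more rigor) drives the share of flawed papers accepted toward zero while the share of sound papers rejected climbs, which is the trade-off depicted in Fig. 1.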

When deciding on the rigor of the peer review process, consideration should be given to the cost of a type I error versus that of a type II error, and a precautionary principle should be adopted. When the cost of a type I error is high, as it may well be for COVID-19 research bearing on public health, peer review measures should be tightened (Ioannidis 2020). The measures we recommend below, covering the period from submission to publication of original research papers, are rigorous and would take at least a month or two, in order to minimize the risk of such an error and of publishing flawed research and highly speculative knowledge on public health (Bauchner 2017; Bauchner et al. 2020). The trade-off, and thus the sacrifice, is that some good-quality COVID-19 research papers may be rejected. For editorials, letters, and perspectives, review by the editors should take no more than two weeks, i.e., desk rejections should be quick (Teixeira da Silva et al. 2018), and some such manuscripts may not require peer review at all, since they are mostly opinions containing normative statements that can be debated by scholars.
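The cost-weighting argument above can be made concrete with a toy expected-cost calculation. The error rates and relative costs below are hypothetical figures of our own, chosen only to illustrate the decision rule, not estimates from the literature:

```python
# Hypothetical error rates for a "fast" versus a "rigorous" review regime.
REGIMES = {
    "fast":     {"type_i": 0.20, "type_ii": 0.10},
    "rigorous": {"type_i": 0.02, "type_ii": 0.40},
}

def expected_cost(regime, cost_type_i, cost_type_ii):
    """Expected cost of a regime given the relative cost of each error type."""
    r = REGIMES[regime]
    return cost_type_i * r["type_i"] + cost_type_ii * r["type_ii"]

# When both errors cost the same, the fast regime looks attractive...
print({name: expected_cost(name, 1, 1) for name in REGIMES})
# ...but when a type I error is far costlier, as it may be for COVID-19
# public health research, the rigorous regime minimizes expected cost.
print({name: expected_cost(name, 20, 1) for name in REGIMES})
```

The point of the sketch is qualitative: once publishing a flawed public-health paper is costed much higher than rejecting a sound one, the expected-cost-minimizing choice shifts to the stricter regime, consistent with the precautionary principle advocated here.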

We believe that the following six-step processFootnote 12 would minimize the risks of publishing questionable original COVID-19 research related to public health:

  1. Prescreening of a paper by the editor-in-chief, with simultaneous advice from an expert editor and a statistician, as suggested by the MJA (Tally 2020), within a week. Where the analyses are found to be robust, the paper can be preprinted on one of a journal’s short-list of preprint servers, if the authors so wish. An OD policy should be mandatory.

  2. Independent peer review by at least three experts selected by the journal’s editors within three weeks, including full access to raw data. Should authors fail to present the raw data underlying their analyses when requested by reviewers, they should be subject to an ethics investigation.

  3. Ideally, an open peer review policy should be in place, and it should be mandatory for all authors rather than optional and thus applicable to only some.

  4. There should be no immediate overnight acceptance of any paper on COVID-19 research, to avoid an association with predatory publishing practices. Authors would have 1–2 weeks to complete minor revisions and 3–4 weeks for major revisions, with another 1–2 weeks for each additional revision. In contrast, Eisen et al. (2020) advocate no time limit on revisions. Reviewers should examine the revisions within a week and decide to reject, to accept, or to request additional revisions.

  5. The editorial decision should be based on reviewers’ recommendations, and acceptance should require the unanimous approval of all three reviewers. Doubt should be minimized in order to minimize the risk of a type I error. Unanimity would most likely keep retractions at no more than four for every 10,000 articles, i.e., a retraction rate of 0.04% (Brainard and You 2018).

  6. Processing of a manuscript to publication should take, at most, another 2 weeks, and should include publication of the open peer review reports, authors’ responses, and editorial comments and decisions.
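The unanimity requirement in step 5 can be quantified with a simple probability sketch. Assuming the three reviewers err independently (an idealizing assumption) and each mistakenly endorses a flawed manuscript with probability p, the flawed manuscript survives unanimous review only with probability p cubed. The per-reviewer error rates below are hypothetical:

```python
def pass_probability(p_endorse_flawed: float, n_reviewers: int = 3) -> float:
    """Probability that a flawed manuscript is unanimously endorsed,
    assuming reviewers err independently (idealizing assumption)."""
    return p_endorse_flawed ** n_reviewers

# Hypothetical per-reviewer error rates, for illustration only.
for p in (0.3, 0.2, 0.1):
    print(f"per-reviewer error {p:.0%} -> unanimous pass rate {pass_probability(p):.3%}")
```

Even a per-reviewer error rate of 10% yields a unanimous pass rate of 0.1%. Whether this translates into the 0.04% retraction rate cited from Brainard and You (2018) also depends on the share of flawed submissions and on post-publication detection, which this sketch does not model; unanimity is a strong but not sufficient safeguard.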


This paper argues that the volume of research being conducted may be placing unprecedented strain on publishers’ online submission systems, peer reviewers and editors, many of whom are struggling to deal with the sheer mass of submissions while simultaneously coping with the health, emotional and psychological pressures that this pandemic has imposed on society as a whole. With such large volumes of information available, the publishing system is somewhat overwhelmed and the selection of pertinent literature can be challenging (Brainard 2020). Handling a large volume of submissions in order to release potentially valuable information to the public, either to combat the virus or to raise awareness, may result in a lapse in the quality of peer review or editorial oversight (Bauchner et al. 2020; Chirico et al. 2020; Palayew et al. 2020). Authors, peer reviewers, editors, publishers, and media that cover published COVID-19-related research, as well as the public, face considerable challenges in the months ahead. The US academic research enterprise has been massively impacted, including educational and research disruption (Radecki and Schonfeld 2020), and the scenario is likely similar around the globe. As COVID-19-related literature continues to grow,Footnote 13,Footnote 14 the inability to deal effectively with the correction of errors may fragilize academic publishing, inducing not only a health crisis but a publishing crisis (Bell and Green 2020). Only time will tell whether the mistakes in papers caused by these publishing weaknesses, as well as the frailties in society and healthcare caused by COVID-19 in the past few months, will lead to the emergence of a more robust, or a more fragilized, publishing landscape.Footnote 15


  1. , archived at: (last accessed: November 4, 2020).

  2. The server for the Surgisphere website ( has been suspended (last accessed: June 7, 2020).

  3. Currently, Google Scholar does not indicate if a paper is retracted or has an expression of concern, a status that can only be gleaned if one clicks the links. We recommend that Google Scholar add a publicly visible label to indicate this status.

  4. (June 4, 2020; last accessed: November 4, 2020).

  5. (June 17, 2020; last accessed: November 4, 2020).

  6. (Public Health Emergency COVID-19 Initiative; last accessed: November 4, 2020).

  7. (June 5, 2020; last accessed: November 4, 2020).

  8. (last accessed: November 4, 2020).

  9. (last accessed: November 4, 2020).

  10. (March 26, 2020; last accessed: November 4, 2020).

  11. (January 31, 2020; last accessed: November 4, 2020).

  12. We recognize that this is simply a representation of an “ideal” solution and that, in reality, there are likely to be weaknesses in each of these steps, due to biases and human failure. For example, it is reasonable to expect that authors, peer reviewers and editors, as well as publishing staff, are in some way being impacted by COVID-19, physically, emotionally, or psychologically, potentially lowering the optimization of any one of these processes.

  13. 67,172 papers until November 1, 2020 (last accessed: November 4, 2020).

  14. 76,830 papers from January 21 to November 2, 2020 (last accessed: November 4, 2020).

  15. (October 19, 2020; last accessed: November 4, 2020).


  1. Abritis, A., A. Marcus, and I. Oransky. 2020. An “alarming” and “exceptionally high” rate of COVID-19 retractions? Accountability in Research.

  2. Barakat, A.F., M. Shokr, J. Ibrahim, J. Mandrola, and I.Y. Elgendy. 2020. Timeline from receipt to online publication of COVID-19 original research articles. medRxiv.

  3. Bauchner, H. 2017. The rush to publication: an editorial and scientific mistake. Journal of the American Medical Association 318 (12): 1109–1110.

  4. Bauchner, H., P.B. Fontanarosa, and R.M. Golub. 2020. Editorial evaluation and peer review during a pandemic: how journals maintain standards. Journal of the American Medical Association 324 (5): 453–454.

  5. Bell, K., and J. Green. 2020. Premature evaluation? Some cautionary thoughts on global pandemics and scholarly publishing. Critical Public Health 30 (4): 379–383.

  6. Björk, B.-C. 2019. Acceptance rates of scholarly peer-reviewed journals: a literature survey. El Profesional de la Información 28 (4): e280407.

  7. Boulware, D.R., M.F. Pullen, A.S. Bangdiwala, et al. 2020. A randomized trial of hydroxychloroquine as postexposure prophylaxis for Covid-19. New England Journal of Medicine 383: 517–525.

  8. Brainard, J. 2020. Scientists are drowning in COVID-19 papers. Can new tools keep them afloat? Science.

  9. Brainard, J., and J. You. 2018. What a massive database of retracted papers reveals about science publishing’s ‘death penalty.’ Science 25 (1): 1–5.

  10. Chirico, F., J.A. Teixeira da Silva, and N. Magnavita. 2020. “Questionable” peer review in the publishing pandemic during the time of Covid-19: implications for policy makers and stakeholders. Croatian Medical Journal 61 (3): 300–301.

  11. Csiszar, A. 2016. Peer review: troubled from the start. Nature 532 (7599): 306–308.

  12. Di Girolamo, N., and R.M. Reynders. 2020. Characteristics of scientific articles on COVID-19 published during the initial 3 months of the pandemic. Scientometrics 125 (1): 795–812.

  13. Drummond, R. 2016. Let’s make peer review scientific. Nature News 535 (7610): 31.

  14. Eisen, M.B., A. Akhmanova, T.E. Behrens, and D. Weigel. 2020. Publishing in the time of COVID-19. eLife 9: e57162.

  15. Fogel, D.B. 2018. Factors associated with clinical trials that fail and opportunities for improving the likelihood of success: a review. Contemporary Clinical Trials Communications 11: 156–164.

  16. Funck-Brentano, C., L.S. Nguyen, and J.E. Salem. 2020. Retraction and republication: cardiac toxicity of hydroxychloroquine in COVID-19. The Lancet 396 (10245): E2–E3.

  17. Funck-Brentano, C., and J.E. Salem. 2020. Chloroquine or hydroxychloroquine for COVID-19: why might they be hazardous? The Lancet. Retraction:

  18. Heckman, J.J., and S. Moktan. 2020. Publishing and promotion in economics: the tyranny of the top five. Journal of Economic Literature 58 (2): 419–470.

  19. Horbach, S.P.J.M. 2020. Pandemic publishing: medical journals strongly speed up their publication process for Covid-19. Quantitative Science Studies 1 (3): 1056–1067.

  20. Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2 (8): e124.

  21. Ioannidis, J.P.A. 2020. Coronavirus disease 2019: the harms of exaggerated information and non-evidence-based measures. European Journal of Clinical Investigation 50 (4): e13223.

  22. Jacobsen, K.H., and E.K. Vraga. 2020. Improving communication about COVID-19 and other emerging infectious diseases. European Journal of Clinical Investigation 50 (5): e13225.

  23. Kun, Á. 2020. Time to acceptance of 3 days for papers about COVID-19. Publications 8: 30.

  24. Lakens, D. 2020. Pandemic researchers—recruit your own best critics. Nature 581 (7807): 121.

  25. London, A.J., and J. Kimmelman. 2020. Against pandemic research exceptionalism. Science 368 (6490): 476–477.

  26. Martin, S. 2017. Word-of-mouth in the health care sector: a literature analysis of the current state of research and future perspectives. International Review on Public and Nonprofit Marketing 14 (1): 35–56.

  27. Matias-Guiu, J. 2020. The role of scientific journal editors during the COVID-19 pandemic. Neurologia 35 (4): 223–225 (in Spanish with English abstract).

  28. Mavrogenis, A.F., A. Quaile, and M.M. Scarlat. 2020. The good, the bad and the rude peer-review. International Orthopaedics (SICOT) 44: 413–415.

  29. Mehra, M.R., S.S. Desai, S. Kuy, T.D. Henry, and A.N. Patel. 2020a. Cardiovascular disease, drug therapy, and mortality in Covid-19. New England Journal of Medicine; expression of concern; retraction.

  30. Mehra, M.R., S.S. Desai, F. Ruschitzka, and A.N. Patel. 2020b. Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. The Lancet; erratum; expression of concern; retraction.

  31. Oller, J., and C. Shaw. 2020. Brave new world: omens and opportunities in the age of COVID-19. International Journal of Vaccine Theory, Practice, and Research 1 (1): 1–10.

  32. Palayew, A., O. Norgaard, K. Safreed-Harmon, T.H. Andersen, L.N. Rasmussen, and J.V. Lazarus. 2020. Pandemic publishing poses a new COVID-19 challenge. Nature Human Behaviour 4 (7): 666–669.

  33. Radecki, J., and R.C. Schonfeld. 2020. The impacts of COVID-19 on the research enterprise: a landscape review. Ithaka S+R research report, October 26, 2020. Accessed 4 Nov 2020.

  34. Sharun, K., K. Dhama, S.K. Patel, M. Pathak, R. Tiwari, B.R. Singh, R. Sah, D.K.B. Aldana, A.J.R. Morales, and H. Leblebicioglu. 2020. Ivermectin, a new candidate therapeutic against SARS-CoV-2/COVID-19. Annals of Clinical Microbiology and Antimicrobials 19 (1): article 23.

  35. Shuja, J., E. Alanazi, W. Alasmary, and A. Alashaikh. 2020. COVID-19 open source datasets: a comprehensive survey. Applied Intelligence.

  36. Tally, N.J. 2020. Rapid publishing in the era of coronavirus disease 2019 (COVID-19). Medical Journal of Australia 212 (11): 535–536.

  37. Teixeira da Silva, J.A. 2020a. Silently withdrawn or retracted preprints related to Covid-19 are a scholarly threat and a potential public health risk: theoretical arguments and suggested recommendations. Online Information Review.

  38. Teixeira da Silva, J.A. 2020b. An alert to COVID-19 literature in predatory publishing venues. The Journal of Academic Librarianship 46 (5): 102187.

  39. Teixeira da Silva, J.A., A. Al-Khatib, V. Katavić, and H. Bornemann-Cimenti. 2018. Establishing sensible and practical guidelines for desk rejections. Science and Engineering Ethics 24 (4): 1347–1365.

  40. Teixeira da Silva, J.A., and J. Dobránszki. 2018. Citing retracted papers affects education and librarianship, so distorted academic metrics need a correction. Journal of Librarianship and Scholarly Communication 6: eP2199.

  41. Teixeira da Silva, J.A., J. Dobránszki, P. Tsigaris, and A. Al-Khatib. 2019. Predatory and exploitative behaviour in academic publishing: an assessment. The Journal of Academic Librarianship 45 (6): 102071.

  42. Teixeira da Silva, J.A., P. Tsigaris, and M.A. Erfanmanesh. 2020. Publishing volumes in major databases related to Covid-19. Scientometrics.

  43. Toth, J. 2020. Reasons to decline an invitation to peer review during the coronavirus (COVID-19) outbreak—are there implications for journal policy? KOME 8 (1): 1–6.

  44. Wallis, L.A. 2020. COVID-19 severity scoring tool for low resourced settings. African Journal of Emergency Medicine; retraction and republication.

  45. Yu, Y., Q.-L. Shi, P. Zheng, L. Gao, H.-Y. Li, P.-X. Tao, B.-H. Gu, D.-F. Wang, and H. Chen. 2020. Assessment of the quality of systematic reviews on COVID-19: a comparative study of previous coronavirus outbreaks. Journal of Medical Virology 92 (7): 883–890.

  46. Zhuang, G.-H., M.-W. Shen, L.-X. Zeng, B.-B. Mi, F.-Y. Chen, W.-J. Liu, L.-L. Pei, X. Qi, and C. Li. 2020. Potential false-positive rate among the ‘asymptomatic infected individuals’ in close contacts of COVID-19 patients. Zhonghua Liu Xing Bing Xue Za Zhi (Chinese Journal of Epidemiology) 41 (4): 485–488 (in Chinese).



This research received no external funding.

Author information




Conceptualization, formal analysis, investigation, resources, writing—original draft preparation, writing—review and editing, visualization, supervision, all three authors. All authors have read and agreed to the published version of the manuscript.

Corresponding authors

Correspondence to Jaime A. Teixeira da Silva, Helmar Bornemann-Cimenti or Panagiotis Tsigaris.

Ethics declarations

Conflict of interest

The authors declare no conflicts of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Teixeira da Silva, J.A., Bornemann-Cimenti, H. & Tsigaris, P. Optimizing peer review to minimize the risk of retracting COVID-19-related literature. Med Health Care and Philos 24, 21–26 (2021).



  • Academic quality
  • Correction
  • Public health risk
  • Retraction
  • Type I and II errors
  • Withdrawal