
European Journal of Epidemiology, Volume 33, Issue 11, pp 1025–1031

Case study in major quotation errors: a critical commentary on the Newcastle–Ottawa scale

  • Andreas Stang
  • Stephan Jonas
  • Charles Poole
REVIEW

Abstract

The Newcastle–Ottawa scale (NOS) is one of many scales used to judge the quality of observational studies in systematic reviews. A commentary published in this journal in 2010 criticized the NOS for the arbitrariness of its quality-item definitions. That commentary was cited 1,250 times through December 2016. We examined its citation history in a random sample of 100 full papers that, according to the Web of Science, cited it. Of these, 96 were systematic reviews, none of which quoted the commentary directly. All but 2 of the 96 indirect quotations (98%) portrayed the commentary as supporting use of the NOS in systematic reviews when, in fact, the opposite was the case. It appears that the vast majority of systematic review authors who cited this commentary did not read it. Journal reviewers and editors did not recognize and correct these major quotation errors. Authors should read each source they cite to make sure their direct and indirect quotations are accurate. Reviewers and editors should do a better job of checking citations and quotations for accuracy. It might also help for commentaries to include abstracts, so that their basic content can be conveyed by PubMed and other bibliographic resources.

Notes

Acknowledgements

This work was supported by the German Federal Ministry of Education and Research (BMBF) [Grant No. 01ER1704]. The funding source had no role in the study design; the collection, analysis, and interpretation of data; the writing of the report; or the decision to submit the paper for publication.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer Nature B.V. 2018

Authors and Affiliations

  1. Director of the Center of Clinical Epidemiology, Institute of Medical Informatics, Biometry and Epidemiology, University Hospital of Essen, Essen, Germany
  2. Department of Epidemiology, Boston University School of Public Health, Boston, USA
  3. Department of Medical Informatics, RWTH Aachen University, Aachen, Germany
  4. Department of Epidemiology, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, USA
