Scientometrics, Volume 94, Issue 2, pp 701–709

Positive results receive more citations, but only in some disciplines

  • Daniele Fanelli


Negative results are commonly assumed to attract fewer readers and citations, which would explain why journals in most disciplines tend to publish too many positive and statistically significant findings. This study tested this assumption by counting the citation frequencies of papers that, having declared to “test” a hypothesis, reported either “positive” (full or partial) or “negative” (null or negative) support. Controlling for various confounders, positive results were cited on average 32% more often. The citation advantage, however, was unequally distributed across disciplines (classified as in the Essential Science Indicators database). Using Space Science as the reference category, the citation differential was positive and formally statistically significant only in Neuroscience & Behaviour, Molecular Biology & Genetics, Clinical Medicine, and Plant & Animal Science. Overall, the effect was significantly larger in applied disciplines, and in the biological compared to the physical and the social sciences. The citation differential was not a significant predictor of the actual frequency of positive results amongst the 20 broad disciplines considered. Although future studies should attempt more fine-grained assessments, these results suggest that publication bias may have different causes and require different solutions depending on the field considered.


Keywords: Bias · File-drawer · Citations · Competition · Publication · Research evaluation



This work was supported by a Marie Curie Intra-European Fellowship (Grant Agreement Number PIEF-GA-2008-221441) and a Leverhulme Early-Career fellowship (ECF/2010/0131).



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

  1. ISSTI - Institute for the Study of Science, Technology and Innovation, The University of Edinburgh, Edinburgh, UK
