Scientometrics, Volume 90, Issue 3, pp 891–904

Negative results are disappearing from most disciplines and countries


Abstract

Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data. This study analysed over 4,600 papers published in all disciplines between 1990 and 2007, measuring the frequency of papers that, having declared to have “tested” a hypothesis, reported positive support for it. The overall frequency of positive supports grew by over 22% between 1990 and 2007, with significant differences between disciplines and countries. The increase was stronger in the social and some biomedical disciplines. Over the years, the United States published significantly fewer positive results than Asian countries (particularly Japan) but more than European countries (particularly the United Kingdom). Methodological artefacts cannot explain away these patterns, which support the hypotheses that research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.
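For concreteness, the measurement behind these figures can be sketched as follows: each paper is coded for its publication year and for whether it reported positive support for the hypothesis it declared to test, and the share of positives is then compared across years. The snippet below is a minimal illustration with made-up data; the variable names are hypothetical, and reading the "over 22%" figure as relative growth between the endpoint years is an assumption made here, not a detail taken from the abstract.

```python
# Minimal sketch of the frequency measurement described in the abstract.
# Hypothetical data layout: each coded paper is a (year, positive) pair,
# where `positive` flags whether the paper reported positive support for
# the hypothesis it tested. The sample values below are illustrative only.

from collections import defaultdict

papers = [
    (1990, True), (1990, False),
    (2007, True), (2007, True), (2007, False),
]

def positive_frequency_by_year(papers):
    """Return {year: share of coded papers reporting positive support}."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for year, positive in papers:
        totals[year] += 1
        positives[year] += positive  # bool counts as 0 or 1
    return {year: positives[year] / totals[year] for year in sorted(totals)}

freq = positive_frequency_by_year(papers)

# One reading of the abstract's "over 22%" figure: relative growth of the
# positive-result share between the first and last year of the window.
growth = (freq[2007] - freq[1990]) / freq[1990]
print(freq, f"relative growth: {growth:.0%}")
```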

Keywords

Bias · Misconduct · Research evaluation · Publication · Publish or perish · Competition


Acknowledgments

Robin Williams gave helpful comments, and François Briatte cross-checked the coding protocol. This work was supported by a Marie Curie Intra-European Fellowship (Grant Agreement Number PIEF-GA-2008-221441) and a Leverhulme Early Career Fellowship (ECF/2010/0131).


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2011

Authors and Affiliations

ISSTI-Institute for the Study of Science, Technology and Innovation, The University of Edinburgh, Edinburgh, Scotland, UK
