Worsening file-drawer problem in the abstracts of natural, medical and social science databases
The file-drawer problem is the tendency of journals to preferentially publish studies with statistically significant results. The problem is an old one and has been documented in various fields, but to the best of my knowledge there has been no quantitative attention to how the issue is developing over time. In the abstracts of several major scholarly databases (Science and Social Science Citation Index (1991–2008), CAB Abstracts and Medline (1970s–2008)), the file-drawer problem is gradually getting worse, in spite of an increase in (1) the total number of publications and (2) the proportion of publications reporting both the presence and the absence of significant differences. The trend is confirmed for particular natural science topics such as biology, energy and environment, but not for papers retrieved with the keywords biodiversity, chemistry, computer, engineering, genetics, psychology and quantum (physics). A worsening file-drawer problem can be detected in various medical fields (infection, immunology, malaria, obesity, oncology and pharmacology), but not for papers indexed with strings such as AIDS/HIV, epidemiology, health and neurology. An increase in the selective publication of some results over others is worrying because it can enhance bias in meta-analyses and hence produce a distorted picture of the evidence for or against a given hypothesis. Long-term monitoring of the file-drawer problem is needed to ensure a sustainable and reliable production of (peer-reviewed) scientific knowledge.
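The distortion that selective publication introduces into a meta-analysis can be illustrated with a toy simulation (a minimal sketch; the true effect size, the per-study standard error, and the selection rule are assumptions for illustration, not values taken from this study):

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2  # assumed small true standardized effect
SE = 0.15          # assumed standard error of each study's estimate

# Simulate 500 underpowered studies of the same true effect:
# each observes the truth plus sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(500)]

# File-drawer selection: only studies reaching two-sided
# significance at alpha = 0.05 (|estimate / SE| > 1.96) get published.
published = [e for e in estimates if abs(e / SE) > 1.96]

all_mean = statistics.mean(estimates)  # close to the true effect
pub_mean = statistics.mean(published)  # inflated by selection
print(f"mean of all studies:       {all_mean:.3f}")
print(f"mean of published studies: {pub_mean:.3f}")
```

Because only the noisiest overestimates clear the significance threshold, the pooled mean of the published subset sits well above the true effect, which is the bias mechanism the abstract warns about.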
Keywords: History of science · Meta-analysis · Publication explosion · Scientific knowledge · Significant differences · STM publishing
Many thanks to L. Ambrosino, R. Brown, T. Hirsch, O. Holdenrieder, M. Jeger, C. Pautasso, R. Russo and H. Schäfer for insight, discussion or support and to I. Cuthill, O. Holdenrieder, T. Matoni, P. Vineis, K. West and anonymous reviewers for helpful comments on a previous draft.