Negative results are disappearing from most disciplines and countries
Fanelli, D. Scientometrics (2012) 90: 891. doi:10.1007/s11192-011-0494-7
Concerns that the growing competition for funding and citations might distort science are frequently discussed, but have not been verified directly. Of the hypothesized problems, perhaps the most worrying is a worsening of positive-outcome bias. A system that disfavours negative results not only distorts the scientific literature directly, but might also discourage high-risk projects and pressure scientists to fabricate and falsify their data. This study analysed over 4,600 papers published in all disciplines between 1990 and 2007, measuring the frequency of papers that, having declared to test a hypothesis, reported positive support for it. The overall frequency of positive supports grew by over 22% between 1990 and 2007, with significant differences between disciplines and countries. The increase was stronger in the social and some biomedical disciplines. The United States published, over the years, significantly fewer positive results than Asian countries (particularly Japan) but more than European countries (particularly the United Kingdom). Methodological artefacts cannot explain away these patterns, which support the hypotheses that research is becoming less pioneering and/or that the objectivity with which results are produced and published is decreasing.
Keywords: Bias · Misconduct · Research evaluation · Publication · Publish or perish · Competition
Competition in science is changing, and concerns that this might distort scientific knowledge are openly and commonly discussed (Young et al. 2008; Statzner and Resh 2010). The traditional race for priority of important discoveries is increasingly intertwined with a struggle for limited funding and jobs, the winners of which are determined by measures of performance and impact (Young et al. 2008; Bonitz and Scharnhorst 2001; Statzner and Resh 2010). Individual scientists, research institutions, countries, international organizations, and scientific journals are increasingly evaluated based on the numbers of papers they publish and citations they receive (Shelton et al. 2009; Meho 2007; Nicolini and Nozza 2008; King 2004). From all these levels, therefore, come pressures on researchers to publish frequently and in high-ranking journals (Lawrence 2003). This combination of competition and bibliometric evaluation has a longer history in the United States, but is increasingly adopted across fields and countries as a way to improve productivity and the rational distribution of resources (Warner 2000; Qiu 2010; de Meis et al. 2003; Osuna et al. 2011). How well bibliometric parameters reflect actual scientific quality, however, is controversial, and the effects that this system might have on research practices need to be fully examined (De Rond and Miller 2005; Osuna et al. 2011; Young et al. 2008).
Several possible problems have been hypothesised, including: undue proliferation of publications and atomization of results (Gad-el-Hak 2004; Statzner and Resh 2010); impoverishment of research creativity, favouring “normal” science and predictable outcomes at the expense of pioneering, high-risk studies (De Rond and Miller 2005); growing journal rejection rates and bias against negative and non-significant results (because they attract fewer readers and citations) (Statzner and Resh 2010; Lortie 1999); sensationalism, inflation and over-interpretation of results (Lortie 1999; Atkin 2002; Ioannidis 2008b); and increased prevalence of research bias and misconduct (Qiu 2010). Indirect empirical evidence supports at least some of these concerns. The per-capita paper output of scientists has increased, whilst their career duration has decreased over the last 35 years in the physical sciences (Fronczak et al. 2007). Rejection rates have increased at high-tier journals (Larsen and von Ins 2010; Lawrence 2003). Negative phrases such as “non-significant difference” have decreased in frequency in papers’ abstracts, while catchy expressions such as “paradigm shift” have increased in titles (Pautasso 2010; Atkin 2002). No study, however, has yet verified directly whether the content of the scientific literature is actually changing.
One of the most worrying distortions that scientific knowledge might endure is the loss of negative data. Results that do not confirm expectations—because they yield an effect that is either not statistically significant or simply contradicts a hypothesis—are crucial to scientific progress, because such progress is only made possible by a collective self-correcting process (Browman 1999; Knight 2003). Yet, a lack of null and negative results has been noticed in innumerable fields (Song et al. 2010; Gerber and Malhotra 2008; Howard et al. 2009; Dwan et al. 2008; Jennions and Moller 2002). Their absence from the literature not only inflates effect size estimates in meta-analyses, thus exaggerating the importance of phenomena, but can also cause a waste of resources replicating research that has already failed, and might even create fields based on completely non-existent phenomena (Ioannidis 2005, 2008b; Feigenbaum and Levy 1996; Song et al. 2010). In meta-analysis, publication bias can in part be corrected by assuming that negative results are simply never written up and are left lying in scientists’ drawers (Formann 2008). However, this assumption is obviously naïve. A realistic scenario includes various forms of conscious and unconscious bias that affect all stages of research—e.g., study design, data collection and analysis, interpretation and publication—producing positive findings when there should be none, thus creating distortions that are difficult to correct a posteriori (Ioannidis 2008a; Marsh and Hanlon 2007; Jeng 2006). The problem is bound to be particularly acute in fields where theories and methods are less clearly defined, and true replication is rare or impossible (Palmer 2000; Kelly 2006; Evanschitzky et al. 2007).
This study verified whether the frequency of positive results has been increasing in the contemporary scientific literature. Papers declaring to have tested a hypothesis were searched for in over 10,800 journals listed in the ISI Essential Science Indicators database, excluding the highest-impact multidisciplinary journals such as Science, Nature or PNAS. By reading the abstracts and, where necessary, the full text of papers sampled at random from all disciplines, it was determined whether the authors of each study had concluded that they had found “positive” (full or partial) or “negative” (null or negative) support for the tested hypothesis. Analyses on a previous sample spanning the years 2000–2007 (N = 2,434) found that papers were more likely to report a positive result in disciplines and methodologies believed to be “softer” (e.g., Psychology vs. Space Science, behavioural vs. chemical analyses), and when the corresponding author worked in states of the USA where academics publish more papers per capita—findings which suggest that this measure is a reliable proxy of bias (Fanelli 2010a, b). This study expanded the analysis to include papers published in the 1990s (total N = 4,656).
Table 1 Logistic regression slopes, standard errors, Wald-test statistics and significance, odds ratios and 95% confidence intervals predicting the likelihood of a paper reporting a positive result, depending on: year of publication, discipline of journal, national location of corresponding author (only countries with N ≥ 90), and paper testing one versus multiple hypotheses (only the first of which was included in the analysis). (Table body, covering the individual disciplines and countries, not reproduced.)
Table 2 Logistic regression slopes, standard errors, Wald-test statistics and significance, odds ratios and 95% confidence intervals predicting the likelihood of a paper reporting a positive result, depending on: year of publication, scientific domain of journal, geographical location of corresponding author, journal pertaining to applied versus pure disciplines, and paper testing one versus multiple hypotheses (only the first of which was included in the analysis). (Table body not reproduced.)
The proportion of papers that, having declared to test a hypothesis, reported full or partial support grew by more than 20% between 1990 and 2007. Underlying this overall increase were significant differences between disciplines and countries. The trend was significantly stronger in the social sciences (i.e., Psychology/Psychiatry, Economics & Business and Social Sciences, General) and in applied disciplines. Whilst a few disciplines showed a null or even slightly declining trend (i.e., Space Science, Geosciences, Neuroscience & Behaviour, Plant and Animal Sciences), most were undergoing significantly positive growth (e.g., Clinical Medicine, Pharmacology and Toxicology, Molecular Biology, Agricultural Sciences). Corresponding authors based in Asian countries (and in particular Japan) reported more positive results than those based in the US, who in turn reported more positives than authors based in Europe, and particularly in the UK.
Methodological artefacts cannot explain the main findings of this study. Although performed by only one author, the coding was blind to year of publication and country of corresponding author. The coding was not blind to discipline, but the effects observed are independent of discipline or domain (Tables 1, 2). The coding was not blind to decade, having been performed first for 2000–2007 and then for 1990–1999. However, if this had introduced a bias in the coding, we would expect a discontinuity between the years 1999 and 2000. No such discontinuity was observed (Fig. 1), and there was no significant difference in the prevalence of positive results between decades when controlling for year of publication (B = 0.183 ± 0.144, Wald = 1.607, df = 1, P = 0.205, power to detect a small effect = 0.996). Indeed, positive results increased significantly within each decade (1990–1999: B = 0.085 ± 0.020, Wald = 18.978, P < 0.001; 2000–2007: B = 0.052 ± 0.024, Wald = 4.584, P = 0.032). This trend had not been noticed in a previous study covering the years 2000–2007, because year of publication had been treated as a purely confounding effect (i.e., tested as a categorical variable). Changing the parameterization of year in these regression models did not affect the estimation of the other parameters in any meaningful way, so previous conclusions remain valid (Fanelli 2010b).
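The slopes above are on the log-odds scale; exponentiating them gives the odds ratio per unit of the predictor. A minimal sketch of this standard conversion, applied to the 2000–2007 year slope quoted above (B = 0.052, SE = 0.024); the helper name is illustrative, not from the paper's software:

```python
import math

def odds_ratio_ci(b, se, z=1.96):
    """Convert a logistic-regression slope and its standard error into an
    odds ratio with a Wald-type 95% confidence interval."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# Year-of-publication slope for 2000-2007 (from the text): B = 0.052 +/- 0.024.
or_, lo, hi = odds_ratio_ci(0.052, 0.024)
print(f"OR per year = {or_:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An odds ratio just above 1 per year, compounded over a decade, is consistent with the sizeable overall growth in positive results reported here.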
To the best of the author’s knowledge, this is the first direct evidence that papers reporting negative results have decreased in frequency across disciplines. A recent study adopting a different approach reached similar conclusions by finding a decrease in the use of the term “non-significant difference” in abstracts from various databases (i.e., Science and Social Sciences Citation Index, Medline, CAB) over a period of up to 40 years (Pautasso 2010). That study did not examine the actual outcome of each paper, only the frequency of a phrase, which might have been an unreliable proxy of publication bias, as suggested by the fact that it yielded very high rates of non-significant results, contradicting ample evidence that these are the minority in all fields (Pautasso 2010). The reliability of the present study’s approach, which assessed the actual conclusions of each paper, is supported by close agreement with previous surveys that found statistically significant results to be around 95% in psychology, 91% in ecology, and between 85 and 96% in biomedicine (Sterling et al. 1995; Csada et al. 1996; Kyzas et al. 2007).
An important limitation of the present study was the use of only one journal database, a choice made to ensure coverage of all domains and unambiguous attribution of each paper to one discipline. The ESI database is a subset of the ISI-Web of Knowledge, which is currently the main source of bibliometric and citation data for research evaluation around the world. The ISI system has been criticised in the past for over-representing journals from the US (Shelton et al. 2007), and for expanding more slowly than the actual growth of the scientific literature (Larsen and von Ins 2010). Such criticisms must be taken into account when evaluating the generality of this study, but cannot undermine its conclusions. A North-American bias within the database might be supported by this study’s data—in which over 50% of all papers had the corresponding author based in the US—but cannot explain away the various national patterns observed (see discussion below). The relatively slow growth of the database would imply that it is covering a decreasing proportion of ‘core’ journals, amidst an expanding volume of publications (Larsen and von Ins 2010). Could negative results be increasingly published in journals not included in the ESI database? This possibility remains to be tested, but it appears unlikely, given that a similar study on abstracts in other databases (see above) reached identical conclusions (Pautasso 2010). In any case, a growing positive-outcome bias within ESI-indexed journals, which supposedly cover the most important publications and most of the citations in each discipline, would still reflect important changes occurring within the scientific system.
Excluding methodological biases, what caused the patterns observed? The likelihood that a study reports a positive result depends essentially on three factors (Fanelli 2010b), which we will examine in turn. (1) The hypotheses tested might be increasingly likely to be true. Obviously, this would not happen because the sciences are closer to the truth today than 20 years ago, but because researchers might be addressing hypotheses that are likely to be confirmed, to make sure they will get “publishable” results. (2) The average statistical power of studies might have increased (for example, if the average sample size of studies had increased), boosting the discovery rate of true relationships (Ioannidis 2005). This would be good news, suggesting an improvement in the methods and quality of studies. However, it would be unlikely on its own to explain all the patterns observed (e.g., differences between disciplines). Moreover, it is unsupported: statistical power appears to be very low in all fields, and there is no evidence that it has grown over the years (Delong and Lang 1992; Jennions and Moller 2003; Maddock and Rossi 2001). (3) Negative results could be submitted and accepted for publication less frequently, or somehow turned into positive results through post hoc re-interpretation, re-analysis, selection or various forms of manipulation/fabrication.
In the mildest scenario of hypothesis 3, changes would be occurring only in how results are written up: “tests” would increasingly be mentioned in the paper only when the results are positive, and negative results would be either embedded in “positive” papers or presented as positive by inverting the original hypothesis. Such a scenario, which would still be a symptom of growing pressures to present a positive outcome, was not supported by the data. In almost all papers examined, the hypotheses were stated in the traditional form, with the null hypothesis representing “no effect”. There was no evidence that negative results were increasingly embedded in papers reporting positive ones: papers listing multiple hypotheses were more likely to report negative support for the first one listed (Tables 1, 2), but their frequency has not grown significantly over the years (B = 0.019 ± 0.013, Wald = 2.058, P = 0.151, power to detect a small and medium effect = 0.543 and 0.999). There was also no evidence that negative results are communicated in other forms, such as conference proceedings: a sample of these was initially included in the analysis by mistake (N = 106), and they tended to report more positive results (X2 = 3.289, df = 1, P = 0.076, power to detect a small effect = 0.999).
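The X² value quoted above is a Pearson chi-square on a 2 × 2 contingency table (outcome by paper type, df = 1). A minimal sketch of that statistic; the counts in the usage line are purely illustrative, not the study's actual data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical counts: positive/negative outcomes in proceedings vs. articles
# (illustrative only; the paper does not report the raw table).
print(round(chi2_2x2(90, 16, 3500, 1050), 3))  # 3.739
```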
Higher frequencies of positive results from non-English speaking or non-US countries have been observed in past meta-analyses, and were usually attributed to editorial and peer-review biases, which might tend to reject papers from certain countries unless they present particularly strong or appealing results (Song et al. 2010; Yousefi-Nooraie et al. 2006). This could explain the higher rate of positive results from Asian countries, but cannot explain why the US have more positive results than the UK—an equally developed and English-speaking country. An editorial bias favouring the US would allow them to publish as many or more negative results than any other country, not fewer. Therefore, the differences observed suggest that researchers in the US have a stronger bias against negative results than in Europe. This hypothesis remains to be fully tested, but it would be independently supported by at least two studies, one showing that the US have a higher proportion of retractions due to data manipulation (Steen 2011), and the other suggesting a higher publication bias among union-productivity studies from the US (Doucouliagos et al. 2005). The causes of these differences remain to be understood, one possible factor being higher pressures to publish imposed by the US research system.
A common argument against concerns for publication bias is that negative results are justifiably ignored per se but become interesting, and are published, when they contradict important predictions and/or previous positive evidence—ensuring self-correction of the literature in the long run (Silvertown and McConway 1997). This does indeed seem to be the case at least in some biomedical fields, where the first paper to report a finding often shows extreme effects that subsequent replications reduce or contradict entirely (Ioannidis et al. 2001; Ioannidis and Trikalinos 2005). However, even if in the long run truth will prevail, in the short term resources go wasted in pursuing exaggerated or completely false findings (Ioannidis 2006). Moreover, this self-correcting principle will not work efficiently in fields where theoretical predictions are less accurate, methodologies less codified, and true replications rare. Such conditions increase the rate of both false positives and false negatives, and a research system that suppresses the latter will suffer the most severe distortions. This latter concern was supported by the finding that positive results were more frequent and had increased more rapidly in the social and many biological sciences [where theories and methods tend to be less codified and replication is rare (Fanelli 2010b; Schmidt 2009; Evanschitzky et al. 2007; Tsang and Kwan 1999; Kelly 2006; Palmer 2000; Jones et al. 2010; Hubbard and Vetter 1996)].
In conclusion, it must be emphasised that the strongest increase in positive results was observed in disciplines—like Clinical Medicine, Pharmacology & Toxicology, Molecular Biology—where concerns for publication bias had a longer history and several initiatives to prevent and correct it have been attempted, including registration of clinical trials, enforcing guidelines for accurate reporting, and creating journals of negative results (Bian and Wu 2010; Simera et al. 2010; Kundoor and Ahmed 2010; Knight 2003). This study suggests that such initiatives have not met their objectives so far, and the problem might be worsening.
The search string “test* the hypothes*” was used to search all 10,837 journals available in the Essential Science Indicators database in December 2008, which classifies journals uniquely into 22 disciplines (for the ESI classification methodology see http://sciencewatch.com/about/met/). Mathematics, however, yielded no usable papers, while the “multidisciplinary” category, which includes journals such as Science or Nature, was excluded. Therefore, papers from 20 disciplines were included in the analysis. The disciplines were grouped into the following domains: Physical Sciences = Space Science, Chemistry, Computer Science, Engineering, Geosciences, Materials Science, Physics; Biological Sciences = Agricultural Sciences, Biology & Biochemistry, Clinical Medicine, Environment/Ecology, Immunology, Molecular Biology & Genetics, Microbiology, Neuroscience & Behaviour, Plant and Animal Sciences, Pharmacology & Toxicology; Social Sciences = Economics & Business, Psychiatry/Psychology, Social Sciences, General.
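The discipline-to-domain grouping above can be written as a simple lookup table (discipline names as listed in the text; the code structure is an illustration, not the software actually used):

```python
# Grouping of the 20 ESI disciplines into three domains, as listed above.
DOMAINS = {
    "Physical Sciences": [
        "Space Science", "Chemistry", "Computer Science", "Engineering",
        "Geosciences", "Materials Science", "Physics",
    ],
    "Biological Sciences": [
        "Agricultural Sciences", "Biology & Biochemistry", "Clinical Medicine",
        "Environment/Ecology", "Immunology", "Molecular Biology & Genetics",
        "Microbiology", "Neuroscience & Behaviour", "Plant and Animal Sciences",
        "Pharmacology & Toxicology",
    ],
    "Social Sciences": [
        "Economics & Business", "Psychiatry/Psychology",
        "Social Sciences, General",
    ],
}

# Invert the grouping to map each discipline to its domain.
DISCIPLINE_TO_DOMAIN = {d: dom for dom, ds in DOMAINS.items() for d in ds}
print(len(DISCIPLINE_TO_DOMAIN))  # 20 disciplines
```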
Papers were sampled in two phases: (1) papers published between 2000 and 2007 (already used in previous studies); (2) papers published between 1990 and 1999. In both phases, all retrieved titles were saved in bibliographic database software, and then a maximum of 150 papers were sampled from each discipline. When the number of titles retrieved for a discipline exceeded 150, papers were selected using a random number generator. In one discipline, Plant and Animal Sciences, an additional 50 papers from the period 2000–2007 were analysed in order to increase statistical power.
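The capped sampling step can be sketched as follows; `sample_titles` is a hypothetical helper using Python's `random` module, not the tool actually used in the study:

```python
import random

def sample_titles(titles, cap=150, seed=None):
    """Return all titles if there are at most `cap` of them; otherwise a
    uniform random sample of `cap` titles, as described in the text."""
    rng = random.Random(seed)
    if len(titles) <= cap:
        return list(titles)
    return rng.sample(titles, cap)

retrieved = [f"paper-{i}" for i in range(400)]  # hypothetical retrieval
chosen = sample_titles(retrieved, cap=150, seed=1)
print(len(chosen))  # 150
```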
By examining the abstract and/or full-text, the specific hypothesis tested in each paper was identified, and it was determined whether the authors had concluded to have found a positive (full or partial) or negative (null or negative) support. If more than one hypothesis was being tested, only the first one listed in the text was considered.
Meeting abstracts were excluded from sampling, whilst sampled papers were excluded when they either did not test a hypothesis (total N = 546) or when there was insufficient information (abstract unclear, and full text not available) to determine the outcome (total N = 38). While the former have no role in the analysis, the latter are technically missing values. Since access to full text was lower for older articles and some disciplines, these missing values were unevenly distributed between disciplines (X2 = 92.770, P < 0.001) and were negatively associated with year (B = −0.080 ± 0.035, Wald = 5.352, P = 0.021). However, these missing values can be ruled out as an important confounding factor for three reasons: (1) there is no reason to believe that these missing papers are more likely to report positive than negative results; (2) they represent a very small fraction of the sample (i.e., 0.8%); (3) their prevalence is higher until 1994 and then declines rapidly, not matching the observed increase in positive results.
All data was extracted by the author. An untrained assistant who was given basic written instructions scored papers the same way as the author in 18 out of 20 cases, and picked up exactly the same sentences for hypothesis and conclusions in all but three cases. The discrepancies were easily explained, showing that the procedure is objective and replicable.
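The raw agreement of 18/20 can be complemented by a chance-corrected measure such as Cohen's kappa, which the text does not report. A minimal sketch; the rating vectors below are hypothetical, constructed only to reproduce 18/20 raw agreement (the actual per-category breakdown is not given):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Hypothetical rating vectors: 18/20 agreements, 16 "pos" / 4 "neg" per rater.
author = ["pos"] * 16 + ["neg"] * 4
assistant = ["pos"] * 15 + ["neg", "pos"] + ["neg"] * 3
print(round(cohens_kappa(author, assistant), 4))  # 0.6875
```

Because positive outcomes dominate the sample, chance agreement is high, so kappa is noticeably lower than the 90% raw agreement.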
The country of each paper was attributed based on the address of the corresponding author. Geographical location was defined by the following groupings: US = United States; EU-15 = Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Portugal, Spain, Sweden, United Kingdom; AS = China, Hong Kong, India, Japan, Singapore, South Korea, Taiwan.
Information on year of publication and country was retrieved after all papers had been coded. Therefore, the coding of papers as “positive” and “negative” was completely blind to year and country of origin.
Post hoc statistical power for indicator contrasts in logistic regression was calculated for main effects only (not interactions) assuming a bimodal distribution and sample frequency equal to that of the categorical variable with the smallest N (each case is specified in the text), to estimate the minimum power available. Base rate variance was measured with Nagelkerke R2 after removing the categorical variables of interest from the model. Post hoc power analysis for the effect of year assumed a standard uniform distribution of papers across years. Small, medium and large effects were assumed to equal Odds-Ratio = 1.5, 2.5 and 4.5, respectively.
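Power figures of this kind can be cross-checked by simulation. For a binary indicator contrast, the logistic-regression slope MLE equals the log odds ratio of the resulting 2 × 2 table, with standard error √(1/a + 1/b + 1/c + 1/d). A Monte Carlo sketch under assumed sample sizes and baseline rate (not the paper's actual settings), using the “medium” effect OR = 2.5 from the text:

```python
import math
import random

def simulated_power(n_per_group, p0, odds_ratio, z_crit=1.96,
                    n_sims=2000, seed=42):
    """Monte Carlo power of the Wald test for a binary predictor in
    logistic regression. For a 2x2 table the slope MLE is log(ad/bc)
    and its standard error is sqrt(1/a + 1/b + 1/c + 1/d)."""
    rng = random.Random(seed)
    # Convert baseline probability and odds ratio to the group-1 probability.
    odds1 = (p0 / (1 - p0)) * odds_ratio
    p1 = odds1 / (1 + odds1)
    hits = 0
    for _ in range(n_sims):
        a = sum(rng.random() < p1 for _ in range(n_per_group))  # group-1 successes
        c = sum(rng.random() < p0 for _ in range(n_per_group))  # group-0 successes
        b, d = n_per_group - a, n_per_group - c
        if min(a, b, c, d) == 0:
            continue  # slope undefined; treated as non-significant
        slope = math.log((a * d) / (b * c))
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        hits += abs(slope / se) > z_crit
    return hits / n_sims

# "Medium" effect from the text: OR = 2.5; group sizes are assumptions.
print(simulated_power(n_per_group=100, p0=0.5, odds_ratio=2.5))
```

Under a true null (OR = 1) the same routine returns roughly the nominal 5% rejection rate, which is a useful sanity check on the simulation.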
All analyses were performed using the statistical packages R 2.12, SPSS 17.0 and G*Power 3.1.
Robin Williams gave helpful comments, and François Briatte crosschecked the coding protocol. This work was supported by a Marie Curie Intra-European Fellowship (Grant Agreement Number PIEF-GA-2008-221441) and a Leverhulme Early-Career fellowship (ECF/2010/0131).