Scientists used to joke about the need for a “Journal of Negative Results”; the punch line was that a journal packed with no-difference studies would make for sleepy reading, and advertisers would not be interested. It turns out that online and open-access publishing have made it possible for not one, but several such journals to come into existence [7]. While they are not about to elbow Nature or Science out of the picture any time soon, these journals do fill a niche in scholarly publishing—but they should not have to.

All biomedical journals should consider publishing the results of negative and no-difference studies a primary responsibility. At Clinical Orthopaedics and Related Research®, we believe negative and no-difference studies are an important part of our remit. We review and will publish articles regardless of the direction of the main finding, whether positive, negative, or no-difference.

In fact, this month in CORR®, we publish a no-difference paper from Kim et al. [DOI: 10.1007/s11999-015-4425-4] in which the authors compared highly crosslinked, remelted polyethylene to less-crosslinked polyethylene; at a minimum of 5 years of followup, they found no differences between the newer bearing material and the traditional polyethylene surface. They observe: “Given that highly crosslinked polyethylene (HXLPE) is newer, as-yet unproven, and more expensive than the proven technology (less-crosslinked polyethylene), we suggest not adopting HXLPE for clinical use until it shows superiority.” This conclusion highlights one important function of no-difference studies: They can slow the adoption of unproven ideas.

There are at least three other important reasons to publish no-difference studies:

  1. Applying different standards for publishing positive and no-difference studies distorts our ability to know whether new treatments really work. Systematic reviews sit atop the Level-of-Evidence pyramid [3], but they can only meta-analyze research that they can find. If publication bias makes a positive trial more likely to be published than a no-difference one, then meta-analyses of the resulting biased pool will systematically overstate the apparent benefits of treatment (the simulation sketched after this list illustrates the distortion).

  2. Numerous incentives already favor the production and dissemination of positive studies. Scientists’ own perceptions may be at the top of the list; the “file-drawer phenomenon,” in which investigators wrongly imagine that their no-difference results are less important than splashy findings of superiority, can result in researchers never writing up or submitting their negative studies, consigning them instead to the “file drawer” [9]. Reviewers’ preferences matter as well; a randomized, well-controlled experimental study of peer review found that reviewers strongly prefer positive findings over no-difference studies [2]. Finally, a number of statistical issues tend to drive results in a positive direction, including significance hunting, data dredging, post hoc hypothesis testing [8], premature halting of no-difference trials for inappropriate reasons [5], and funding sources’ influence on the comparator groups chosen as study controls [4] and even on whether a study’s findings can be released at all [1]. Journals, as arbiters of what is published, have an obligation to be mindful of the pressures working against no-difference results.

  3. If the universe of published studies does not reflect clinicians’ realities, expensive and time-consuming research efforts will be duplicated. Imagine that positive-outcome bias results in the publication of several studies demonstrating apparent efficacy of a treatment, while journals have rejected several other no-difference studies of it. If practicing surgeons observe that the treatment does not work as well as the published (positive) trials suggest, researchers will design studies asking why, and in the process they will repeat the no-difference trials that, unbeknownst to them, were already done but never published.
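
To make the first point above concrete, here is a minimal Monte Carlo sketch of how positive-outcome bias distorts a pooled estimate. Everything in it is an illustrative assumption of ours (the true effect size, the per-arm sample size, the number of trials, and the 0.05 publication filter), not data from any study cited in this editorial.

```python
# Sketch: meta-analyzing only "published" (significant, positive) trials
# overstates a small true effect. All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
TRUE_EFFECT = 0.10   # small true benefit, in standardized units
N_PER_ARM = 50       # patients per arm in each simulated trial
N_TRIALS = 2000      # number of simulated trials

all_effects, published_effects = [], []
for _ in range(N_TRIALS):
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    treated = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    diff = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(diff)
    # Positive-outcome bias: only significant results favoring the new
    # treatment reach print.
    if p < 0.05 and diff > 0:
        published_effects.append(diff)

print(f"True effect:                    {TRUE_EFFECT:.2f}")
print(f"Mean effect, all trials:        {np.mean(all_effects):.2f}")
print(f"Mean effect, 'published' pool:  {np.mean(published_effects):.2f}")
```

With these made-up settings, the “published” pool suggests a benefit several times larger than the truth, and any meta-analysis restricted to that pool inherits the distortion.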

It is important to realize that some no-difference studies fail to detect differences between treatment groups that may well have been present. Because of this, editors need to evaluate these studies with attention to particular details that may not be as important in studies concluding that one treatment is superior. Blunt outcomes tools, insufficient sample size or statistical power, and any of a number of other problems can lead a study to conclude, incorrectly, that no difference exists. Readers should assess these studies carefully: A no-difference result mated with an immodestly written discussion might beget a misleading conclusion. Caveat lector.
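
The sample-size problem, in particular, lends itself to a quick demonstration. In the hedged sketch below, the effect size and trial sizes are assumptions chosen for illustration, not values from any cited study; the point is only that a real, modest difference is easy to miss in a small trial.

```python
# Sketch: how often does a two-arm trial detect a real but modest
# standardized difference of 0.3? All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
TRUE_EFFECT = 0.3   # a real, modest difference between groups
ALPHA = 0.05
REPS = 2000

for n_per_arm in (20, 60, 180, 500):
    detections = 0
    for _ in range(REPS):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(TRUE_EFFECT, 1.0, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        detections += p < ALPHA
    print(f"n = {n_per_arm:3d} per arm -> power ~ {detections / REPS:.2f}")
```

At the smaller sample sizes, most of these simulated trials “find no difference” even though the difference is real. That is a Type II error, not evidence of equivalence.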

Interestingly, though, editors probably can be more permissive about certain sources of bias in no-difference studies (and readers can be more forgiving of them) than in studies that claim the superiority of a new treatment. Here’s why: Selection bias, loss to followup, and certain kinds of assessor bias all tend to inflate the apparent benefits of treatment. Consider a study in which the investigators chose only ideal patients to receive the new treatment, lost a large proportion of them to followup (remember, missing patients tend to fare worse than those accounted for [6]), and allowed the surgeon to assess his or her own work. Claims of efficacy made by such a study should be viewed skeptically. By contrast, if a study with these problems were to conclude that the new treatment is ineffective despite all those biases working in the treatment’s favor, we might be more comfortable taking the investigators at their word.
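
A final hedged sketch (our assumptions, not the editorial’s data) shows the direction of this kind of bias. The true treatment effect below is zero, yet loss to followup concentrated among poor responders in the treated arm manufactures an apparent benefit:

```python
# Sketch: selective loss to followup inflates apparent benefit.
# True effect is zero; all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 500
control = rng.normal(0.0, 1.0, N)   # outcome score, higher = better
treated = rng.normal(0.0, 1.0, N)   # true treatment effect is zero

# Patients who did worse are likelier to go missing [6]: probability of
# completing followup rises with the outcome score.
p_followed = 1.0 / (1.0 + np.exp(-2.0 * treated))
followed = rng.random(N) < p_followed

print(f"True difference:     {treated.mean() - control.mean():+.2f}")
print(f"Observed difference: {treated[followed].mean() - control.mean():+.2f}")
```

The surviving comparison flatters the treatment, which is exactly why a no-difference finding that persists despite biases of this kind is, if anything, the more believable one.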

Studies with obvious methodological flaws such as insufficiently sensitive outcomes tools, sloppily performed interventions, or poorly characterized patient-selection processes should not be published regardless of what they conclude. And while investigators should try to design adequately powered studies, many factors can cause a good experiment to fall short of its planned statistical power; that alone should not disqualify an otherwise well-designed and fairly presented study. If such studies are published, their data can later be pooled or systematically reviewed; this becomes far more difficult when no-difference or negative trials never find their way out into the world.

At CORR®, we are as excited by negative and no-difference studies as we are by positive ones. Readers should be, too.