A common misconception about peer review in the biomedical sciences is that most journals practice “double-blind” review, wherein the authors of papers under consideration do not know the reviewers’ identities, and reviewers do not know the authors’ names or institutions. While this is the practice among most orthopaedic journals of which I am aware, it is neither the norm nor even common among the better medical journals of the world.

For example, the Journal of the American Medical Association (JAMA) family of journals practices single-blind peer review. The AMA Manual of Style, which guides the philosophies and practices of those journals, cites the difficulty of achieving successful blinding, as well as the lack of evidence of clear benefits from double-blinding, as the reasons behind their preference for single-blind review [7]. In addition to JAMA and its many relatives, the New England Journal of Medicine practices single-blind peer review, as do The Lancet, Annals of Surgery (as well as Annals of Medicine), and the Canadian Medical Association Journal [3]. A few journals even practice “open” peer review [1, 2] (or have tried and abandoned it [10]). Open peer review allows authors to know reviewers’ identities, and vice versa, in the hope that it will result in fewer inflammatory comments from reviewers and inject an additional measure of transparency into the process. After all, reviewers can have conflicts of interest, too.

But the evidence supporting any of these approaches remains inferential and indirect. For obvious reasons, it is not easy to conduct true experimental studies on this topic, and the few experiments that have been done were, by and large, either underpowered [5] or focused on whether blinding influences the quality of the review [6, 8] rather than its result. The scant evidence we have on the latter point suggests that blinding makes little difference in manuscript disposition [12].

Because the available evidence suggested that blinding made little difference, Clinical Orthopaedics and Related Research® has long allowed authors the choice of single- or double-blind peer review for the work they send us. In recent years, about half of the authors who have sent papers here have selected single-blind review, and about half have opted for a double-blind process. Because of this, our reviewers are accustomed to seeing papers both ways, which is unusual among biomedical journals. We therefore felt CORR® was the perfect setting for an experimental study that might provide more-definitive evidence to guide the practices that we and other journals use. In particular, we wished to determine whether knowledge of a prestigious author’s identity or institution might increase the likelihood that reviewers would recommend the work for publication. Other studies on the topic did not have the advantage of a cooperating journal in which both approaches to peer review were part of the journal’s normal workflow. We saw this opportunity as too important to pass up, and so CORR’s Board of Trustees endorsed conducting an experiment on the topic here.

In the randomized trial conducted at CORR and published recently in JAMA [11], two versions of a fabricated manuscript were sent to several hundred peer reviewers. The two versions were identical except that in one the names of well-known authors from prestigious institutions were visible to reviewers, while in the other reviewers were blinded to the authors’ identities and institutions. Knowledge of the authors’ prestige increased the likelihood that a reviewer would recommend publication by about 20%. This difference seems meaningful, though perhaps not overwhelming when one considers that each paper typically is evaluated by three reviewers; the influence of author prestige might therefore change the result of about one review in five.

Human nature being what it is, one might reasonably expect the reputation of an author or institution to exert some pull on the peer-review process. In fact, I was surprised the differences were not more pronounced. But they were large enough that, to keep things as fair as possible, the Senior Editor panel at CORR has decided to employ double-blind peer review for all scientific manuscripts here. We felt it important to have good-quality evidence before making a fundamental change to our external-review process. We now have that evidence.

We do not expect this policy to be a panacea. Experience—as well as evidence [5, 8, 12]—suggests that reviewers often can identify authors even when manuscripts are blinded. Such unintended unblinding is likely to become more common as an increasing number of orthopaedic projects are registered prospectively in clinical-trial databases such as www.clinicaltrials.gov. Such registration recently became a requirement for randomized trials in several important general-interest journals of our specialty, including CORR [9].

The controversies on this topic, the experiment’s somewhat-troubling findings [11], and the fact that even a thoughtfully arrived-at policy is unlikely to fully eliminate even this one kind of bias (numerous others certainly remain) all highlight how very complicated peer review is, and how difficult it is to do well.

Winston Churchill offered this observation on the subject of democracy: “Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed, it has been said that democracy is the worst form of Government except all those other forms that have been tried from time to time” [4].

The same might be said for peer review.