Journal of Peace Research has now introduced ‘double-blind’ or ‘masked’ review procedures. In other words, the author’s name and affiliation are removed from the manuscript. This article explains why we make this change now, why we did not make it before, and why the decision was not obvious. The main argument in favor of blinding is that the reviewer should judge the article on the basis of its merit rather than on the basis of the prior reputation or record of the author. However, the empirical evidence on whether blinding makes any difference is mixed, and the practice varies greatly among quality journals. We make this change mainly because double-blind review seems to be the accepted standard among journals that cater to the same readers and authors, and because we do not want there to be any doubt as to the journal’s commitment to peer review. At the same time, we reiterate our commitment to transparency, by permitting referees to sign their reports if they want to, by letting the authors see all the referee reports, by copying the editorial correspondence to the reviewers, and by strengthening our data replication policies.

From the beginning of 2002, Journal of Peace Research has introduced double-blind review procedures. That is, not only will the identity of the reviewer normally be unknown to the author, but we will also keep the name of the author from the referee.

When JPR adopted external peer review in late 1983—before that time the articles were reviewed only by members of the editorial committee—it was thought impractical and unnecessary to anonymize the articles. We have always protected the identity of those reviewers who would like to be anonymous; reviewers have the option of signing their referee report if they wish to be identified, but they are under no pressure to do so. Hiding the identity of the author from the referee is a slightly trickier issue. In many cases, it is quite easy for an experienced reviewer to identify the author, particularly when an earlier version of the article has been presented at a major conference. With the increasing posting of papers on conference websites and personal homepages, and the common software habit of recording the name of the document’s creator in the file itself, it has become even easier for a curious reviewer to establish the identity of the author. For an author to identify the reviewer is a great deal more difficult, although one can sometimes make a fair guess. Journals that now circulate referee reports in electronic form would do well to take note of these less-known features of their software.
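To illustrate how easily such embedded creator information can be inspected and removed, the minimal sketch below clears the identifying fields from a Word manuscript. It is only an illustration, not a description of any procedure actually used at JPR: it assumes a .docx file and the third-party python-docx library, and the filenames are hypothetical.

```python
# Minimal sketch: inspecting and clearing creator metadata from a Word manuscript
# before it is circulated to reviewers. Assumes the third-party python-docx
# library; the filenames are illustrative placeholders.
from docx import Document

doc = Document("manuscript.docx")
props = doc.core_properties

# What a curious reviewer could read from the file's core properties.
print("Creator:", props.author)
print("Last modified by:", props.last_modified_by)

# Blank out the identifying fields and save an anonymized copy.
props.author = ""
props.last_modified_by = ""
doc.save("manuscript_blinded.docx")
```

Comparable metadata fields exist in most word-processing and PDF formats, which is why the caution above applies to electronic circulation generally.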

The main argument in favor of blinding (or masking) the article to a reviewer is straightforward and quite compelling: the reviewer is asked to judge the article on the basis of its merit rather than on the basis of the prior reputation or record of the author. It is not obvious that a high-status author will necessarily get kinder treatment from anonymous reviewers—some junior scholars enjoy the opportunity of trashing the work of a pillar of the profession, while remaining anonymous themselves. But the idea is simply to avoid irrelevant considerations in the editorial process. Moreover, the argument that removing the author’s identity from a manuscript is time-consuming is less relevant in the age of word processing.

Nevertheless, we have felt that since a good proportion of the reviewers were likely to guess the identity of the author, it was better to be certain that they knew. When a review is hostile (or friendly) in excess of its substantive argument, and irrelevant considerations seem to be at work, one can adjust for that in the editorial judgment. Making decisions is, after all, the responsibility of the editor. Outside reviews provide advice, but the editor cannot pass the buck. At the level where the decision is made, the identity of the article’s author is known.

Like most academic traditions, peer review is a practice that originated in the natural sciences. The British Medical Journal used it over 150 years ago (Lock 1986: 3). But double-blind reviewing is by no means a universal practice. Most medical journals do not use it (Davidoff 1998). Neither do the journals of the Royal Society in the UK, but it is ‘a common practice in educational research journals’ (Abell 1994: 225). Some management journals are reported to practice a severe form of blind review, where referees are requested to disqualify themselves if they know who the authors are, causing one analyst to speculate that ‘only those ignorant of the literature would be able to provide reviews for leading researchers’ (Armstrong 1997: 70). My own informal survey and personal experience as an author indicate that in political science and international relations, double-blind reviewing is very much the norm—provided the journal is peer reviewed in the first place.

The guidelines for referees in the Science Editors’ Handbook published by the European Association of Science Editors take an agnostic position on anonymity generally. The publication manual of the American Psychological Association, a book that does not shy away from detailed instructions to authors and editors, is neutral with regard to masked review (APA 1994: 248). The most substantial evidence on editorial practice is found in a survey of 200 journals from all fields conducted in 2000, which found that only 40% concealed the author’s identity (while 90% concealed the referee’s identity from the author). Among the natural sciences, there was a clear majority (2:1) against blinding the article, while in the social sciences, law, and the humanities, there was an even clearer majority in favor (3:1).

I can only speculate about why double-blind reviewing is more common in the social sciences than in the natural sciences. Perhaps the lack of widely accepted theoretical and methodological paradigms in the social sciences leaves them more exposed to partial and irrelevant judgments. Social scientists may also be more alert than natural scientists to issues of fairness and the social functions of evaluation systems.

There is a small experimental and empirical literature on the effect of blind reviewing, but the evidence is mixed (Armstrong 1997; Lock 1986). Some studies find blind reviews to be fairer, others find little difference, and some have even found that blinding harms quality.

At the end of the day, the strongest argument for introducing double-blind procedures in JPR is probably that they are so widely accepted in comparable journals. Any journal that does otherwise risks being seen as deviant. We have heard very few objections from authors to our practice, but several reviewers have found it unusual and a few have complained. We cannot exclude the possibility that some authors may have avoided submitting to JPR because of our excessive openness. We do not want this issue to raise any doubt about the commitment of JPR to peer review and impartial quality control. Therefore, we have decided to make articles anonymous before sending them out for review. This change has already been implemented.

We ask all authors to prepare a separate front page with their name and affiliation. This page will be removed before the manuscript is circulated to reviewers. The brief biographical note, which will be required when a manuscript is accepted for publication, should be on a separate, final page. Authors are welcome to keep self-references to published work or conference papers, but should refer to them in the third person rather than by such phrases as ‘our work’ or ‘we have shown earlier’.

We are as strongly committed to transparency as we are to peer review. For that reason, our standard practice has been to circulate to each referee a copy of our letter to the author and all the referee reports. In this way, the referee can see what use we have made of his or her input to the editorial process, and in what way that input is similar to or different from that of other referees. On this point, we have actually had quite a bit of feedback from our referees, and it has been overwhelmingly positive. We hope that this openness will contribute to even better reviewing in the future. Although double-blind procedures make this a little more cumbersome, we will maintain the practice. We will continue to copy the editorial correspondence to the referees, but we will remove the name of the addressee and take care to write the letter in a way that does not hint at the author’s identity.

Another way in which we promote transparency is through our replication policy. Since 1998, we have required that authors of articles with systematic empirical information make their data (and associated programming files) available on the web or in a similar fashion. We have also established our own JPR data replication page (at www.prio.no/jpr/datasets), where we provide links to the web addresses where the authors have posted their data. Where the authors do not have a suitable website, we post the data on our own website. As of 1 March 2002, this page contains references to 79 datasets.

The profession has a long way to go before the replication norms are practiced smoothly. Anyone who tests the links on our website will discover that some of them lead nowhere; the author has moved, the web address has changed, or (in a very few cases) the author has changed his or her mind or delayed posting the data. In other cases, the data have been posted but only in a general form. The reader is not privileged to know exactly what subset of data was used, and there is no information on coding procedures or calculating routines. Other journals that profess to have a replication policy—and they include most of the journals that are fairly similar to JPR in their approach to world politics—have similar problems (Gleditsch/Metelits/Strand 2003).
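For readers who want to test the links for themselves, the sketch below shows one simple way to check whether dataset addresses still resolve. It is purely illustrative: it uses only the Python standard library, and the URLs are hypothetical placeholders rather than actual entries from the JPR data replication page.

```python
# Minimal sketch of checking whether replication-data links still resolve.
# Standard library only; the URLs below are hypothetical placeholders.
import urllib.request
import urllib.error

dataset_links = [
    "http://www.example.edu/~author/conflict_data.zip",
    "http://www.example.org/replication/jpr_article.dta",
]

for url in dataset_links:
    try:
        # A HEAD request asks only for headers, not the (possibly large) file.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(f"OK   {resp.status}  {url}")
    except urllib.error.HTTPError as err:   # server answered with an error code
        print(f"DEAD {err.code}  {url}")
    except urllib.error.URLError as err:    # host gone, DNS failure, timeout, etc.
        print(f"DEAD ---  {url}  ({err.reason})")
```

A check of this kind reports only whether an address answers; it cannot tell whether the posted data are complete or documented well enough for replication, which is the deeper problem discussed above.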

We are slowly but deliberately strengthening our replication requirement. Authors are asked to supply the data to the editorial office with the final version of the manuscript. We will make the data available directly from JPR if the author’s website fails to deliver the goods. We hope eventually to find the resources to inspect the data, codebooks, and log files submitted to us, with a view to making sure that replication is actually possible from what is available. We have not yet seriously entertained the idea that replication data might be made available to referees. But we will monitor the international discussion with a view to keeping JPR at the forefront of the replication movement. We do this in the firm conviction that King (1995) was right when he portrayed replication as benefiting not only the profession but also the scholar who makes his or her data available. Having other people use your work is a road to academic recognition and should be encouraged by authors as much as by journals. In a study of citations to JPR articles in the period 1991–2001, we have found that articles that provide data are more frequently cited, even when controlling for a number of other relevant factors (Gleditsch/Metelits/Strand 2003). Although our replication policy is primarily designed to serve the discipline as a whole, we hope that authors who are given this extra burden of documentation recognize that it is also likely to serve their own interests.