The number of manuscripts submitted to most scholarly journals has increased tremendously over the last few decades and shows no sign of leveling off. Water, Air and Soil Pollution is no exception in this respect. In 1980, WASP published 89 articles, for a total of 1,008 pages. By 1990, the number of published articles had risen to 259, occupying 2,700 pages, a roughly 200% increase in articles in 10 years! In 2009, the journal published 297 articles, occupying 3,601 pages. In other words, over the last 30 years, the page count of WASP has increased by roughly 260%. And still, this volume of published scholarship is only the tip of the iceberg: many more articles are submitted than are published. In recent years, probably thanks in part to the steadily rising impact factor of the journal (from 1.058 in 2004 to 1.398 in 2009), the total number of manuscripts submitted to WASP has skyrocketed; in 2009 alone, the journal received 1,204 manuscripts. Again, this situation is not unique: many other journals report similar trends (e.g., Baveye et al. 2009; Baveye 2010). If the recent past, with the enormous increase in scholarly production in countries like China and India, is any indication of what lies ahead, a publishing “tsunami” is looming, for which we are hardly prepared (Baveye 2010).

This frenzy to publish, manifested across the board, is making it more and more difficult for editors and associate editors of scientific journals to secure peer reviews in a timely fashion for the manuscripts they handle (e.g., Baveye et al. 2009; Martin et al. 2009). We hear from fellow editors that it is increasingly common to have to issue 10 to 15 invitations before securing the peer reviews needed to assess a given manuscript. Even though WASP generally fares better than this, it is certainly true that a high proportion of review invitations are declined.

Most often, researchers declining invitations to review invoke the fact that they are too busy to add yet another item to their already overcommitted schedules. Yet, however one looks at it, peer reviewing in any of its flavors (e.g., open, single-blind, or double-blind) is a crucial component of the publishing process, and nobody has yet come up with a viable alternative. Therefore, we need to find a way to convince our colleagues to peer review manuscripts more often. This can be done with a stick or with various types of carrots, and some discussion of what is likely to work best in each discipline would be most welcome.

The “sticks” occasionally envisaged by editors (e.g., Anonymous 2009) are straightforward, at least to explain. For the peer-reviewing enterprise to function well, it is a civic duty for each researcher to review every year as many manuscripts as the number of reviews he or she receives for his or her own papers. Therefore, someone submitting ten manuscripts in a given year should be willing to review 20–30 manuscripts during the same time frame (assuming that each manuscript is reviewed by two or three individuals, as is commonly the case). If this person does not meet the required quota of reviews, restrictions would be imposed on the submission of any new manuscript for publication. Boehlert et al. (2009) have advocated such a “stick” in the case of the submission of grant proposals (Anonymous 2009). Hauser and Fehr (2007), in an elaborate and provocative system of “penalties” or “costs,” suggest that for every manuscript a reviewer refuses to review for a given journal, the journal add a 1-week delay to the review of that person’s own next submission. For reviewers who agree to review but subsequently turn in their review late, Hauser and Fehr (2007) advocate holding the reviewer’s next personal submission to the journal in editorial limbo, before it is sent out for review, for twice the total time the late review was held, that is, the number of days since receipt of the manuscript plus the number of days past the deadline.
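To make the arithmetic of these proposals concrete, the sketch below encodes the quota rule and one possible reading of the Hauser and Fehr penalty scheme. The three-reviewers-per-manuscript figure and the exact penalty formula (one week per declined invitation, plus twice the total holding time of a late review) are our assumptions for illustration, not formulas quoted from the sources.

```python
from datetime import timedelta

REVIEWS_PER_MANUSCRIPT = 3  # assumed upper figure; the text says two or three reviewers

def reviews_owed(manuscripts_submitted: int) -> int:
    """Civic-duty quota: review as many manuscripts as the number of
    reviews one's own submissions consume in the same time frame."""
    return manuscripts_submitted * REVIEWS_PER_MANUSCRIPT

def hauser_fehr_delay(declined_invitations: int, days_held: int, days_late: int) -> timedelta:
    """One illustrative reading of Hauser and Fehr's (2007) penalties:
    a 1-week delay per declined invitation, plus twice the total time a
    late review was held (days since receipt plus days past the deadline).
    The exact arithmetic is our interpretation, not a quoted formula."""
    decline_penalty = timedelta(weeks=declined_invitations)
    lateness_penalty = timedelta(days=2 * (days_held + days_late))
    return decline_penalty + lateness_penalty

# A researcher submitting 10 manuscripts would owe up to 30 reviews.
print(reviews_owed(10))  # 30
# Two declined invitations plus a review held 40 days and 10 days overdue:
print(hauser_fehr_delay(2, days_held=40, days_late=10).days)  # 114
```

Even this toy version hints at the bookkeeping burden the next paragraph describes.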

However, any such automatic accounting of reviewing activities is fraught with difficulties. For one thing, it would not prevent civically challenged individuals from defeating the system by writing short, useless reviews just to make up the numbers. To close that loophole, someone would have to assess whether reviews meet minimal standards of quality before they are counted toward the annual or running total. With this feature, and additional exemptions, e.g., to allow young researchers to get established in their careers, the review accounting system would rapidly become unwieldy.

An alternative approach, instead of sanctioning bad reviewing practices, would be to reward good ones. Individual journals already do this in a number of ways, for example by making sure that the time and effort of reviewers are used efficiently or by giving awards to outstanding reviewers (Baveye et al. 2009). The lucky few singled out by such awards see their reviewing efforts validated, but fundamentally, these awards do not change the unsupportive atmosphere in which researchers review manuscripts. The problem has to be attacked at its root, in the current culture of universities and research centers, where administrators tend to equate research productivity with the number of articles published and the amount of extramural funding brought in. Annual activity reports occasionally require individuals to mention the number of manuscripts or grant proposals reviewed, but these data are currently unverifiable and are therefore generally assumed to carry no weight in promotions or salary adjustments.

Fortunately, there may be a way out of this difficulty. All major publishers have information on who reviews what, how long reviewers take to respond to invitations, and how long it takes them to send in their reviews. All it would take, in addition, would be for the editors or associate editors who receive reviews to assess and record their usefulness. The result would be a very rich data set which, if made available to universities and research centers in a way that preserves the anonymity of the peer-review process, could be used fruitfully to evaluate individuals’ reviewing performance and impact. Of course, one would have to agree on what constitutes a “useful” review. Pointing out typos and syntax errors in a manuscript is useful, but not hugely so. Identifying problems and offering ways to overcome them, proposing advice on how to analyze data better, or editing the text to increase its readability are all more substantial contributions. Generally, one might consider that there is a gradation in usefulness from reviews focused on finding flaws in a manuscript to those focused on helping authors improve their text. Even a minimal amount of debate among scientists would likely result in a reliable set of guidelines on how to evaluate peer reviews. Admittedly, this too would require additional time and effort.
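As a sketch of what such a data set might look like, the hypothetical fragment below aggregates, per reviewer, the volume, turnaround, and editor-assessed usefulness that publishers and editors could record. The field names and the 0-to-3 usefulness scale are invented here for illustration; no publisher system is being described.

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    reviewer: str
    days_to_respond: int   # time to accept or decline the invitation
    days_to_review: int    # time to return the completed review
    usefulness: int        # editor's rating: 0 (typo-spotting) to 3 (substantive help)

def reviewer_profile(records, reviewer):
    """Aggregate one reviewer's statistics without exposing which
    manuscripts were reviewed, preserving peer-review anonymity."""
    mine = [r for r in records if r.reviewer == reviewer]
    return {
        "reviews": len(mine),
        "mean_turnaround_days": sum(r.days_to_review for r in mine) / len(mine),
        "mean_usefulness": sum(r.usefulness for r in mine) / len(mine),
    }

records = [
    ReviewRecord("A. Reviewer", days_to_respond=2, days_to_review=21, usefulness=3),
    ReviewRecord("A. Reviewer", days_to_respond=5, days_to_review=35, usefulness=2),
]
print(reviewer_profile(records, "A. Reviewer"))
# {'reviews': 2, 'mean_turnaround_days': 28.0, 'mean_usefulness': 2.5}
```

Reporting only such aggregates to an employer would keep individual manuscripts and authors invisible, as the anonymity requirement demands.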

Beyond making statistics available to decision makers, other options are available to raise the visibility and recognition of peer reviews (Baveye 2010). Rightly or wrongly, universities and research centers worldwide now rely more and more on some type of scientometric index, like the h-index, to evaluate the “impact” of their researchers. Many of these indexes, and certainly the h-index, implicitly encourage researchers to publish more articles, which deters them from engaging in peer reviewing. In addition, none of these indexes, at the moment, encompasses in any way the often significant impact individuals can have on a discipline through their peer reviewing. One could conceive of scientometric indexes that include some measure of peer-reviewing impact, calculated on the basis of the statistics mentioned earlier.
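Purely as an illustration of how such an index could be constructed, the variant below (our invention, not an index from the literature) applies the familiar h-index threshold rule to the pooled list of paper citation counts and hypothetical editor-assigned review-impact scores:

```python
def h_index(citations):
    """Standard h-index: the largest h such that h items have
    at least h citations each."""
    counts = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def hr_index(citations, review_scores):
    """Hypothetical variant: pool citation counts with review-impact
    scores, so substantive reviewing counts alongside publishing."""
    return h_index(list(citations) + list(review_scores))

papers = [25, 8, 5, 3, 3]  # citation counts of five papers
reviews = [4, 4, 2]        # hypothetical editor-assigned review-impact scores

print(h_index(papers))            # 3
print(hr_index(papers, reviews))  # 4: substantive reviewing lifts the index
```

Under this toy construction, a researcher who reviews well scores higher than an otherwise identical colleague who never reviews, which is exactly the incentive the text argues is missing.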

Clearly, some of these developments will not happen overnight. A necessary first step is for researchers to discuss with their campus administrations, or the managers of their research institutions, the crucial importance of peer reviewing, which improves virtually all manuscripts and can alert editors to fraud and plagiarism, and the need to have this activity valued in the same way that research, teaching, and outreach are. This is especially true for society journals, whose members are routinely called upon to review, to serve on editorial boards, and to be appointed or elected as editors and editors-in-chief. Such a debate is long overdue. Once administrators perceive that there is a need in this respect, are convinced that it will not cost a fortune to give peer reviewing more attention, and formulate a clear demand to librarians and publishers to help move things forward, there is every hope that sooner or later, the scholarly publishing enterprise will once again operate under optimal conditions. To that end, various journal editors have decided to publish editorials, partly or largely reproducing some of the arguments presented above, in the hope of helping move things in a positive direction before the looming tsunami hits the publishing enterprise.