On Hochberg et al.’s “The tragedy of the reviewer commons”

Abstract

We discuss each of the recommendations made by Hochberg et al. (Ecol Lett 12:2–4, 2009) to prevent the “tragedy of the reviewer commons”. Having scientific journals share a common database of reviewers would recreate a bureaucratic organization in which extra-scientific considerations prevail. Pre-reviewing of papers by colleagues is a widespread practice but raises problems of coordination. Revising manuscripts in line with all reviewers’ recommendations presupposes that those recommendations converge, which is an acrobatic assumption. Signing an undertaking that authors have taken into account all reviewers’ comments is both authoritarian and sterilizing. Sending previous comments with subsequent submissions to other journals amounts to creating a cartel and a single all-encompassing journal, which again is sterilizing. Using young scientists as reviewers is highly risky: they might prove very severe, and if they are not yet published authors themselves, the recommendation violates the principle of peer review. Asking reviewers to be more severe would only create a crisis in the publishing houses and actually increase reviewers’ workloads. The criticisms of authors who seek to publish in the best journals are unfair: it is natural for scholars to try to publish in the best journals and not to resign themselves to being second-rate. Punishing lazy reviewers would only lower the quality of reports: instead, we favor the idea of paying reviewers “in kind” with, say, complimentary books or papers.


Notes

  1. We will not discuss here whether this image, which comes from environmental sciences (Hardin 1968), is appropriate.

  2. Moreover, journals sometimes change publisher (e.g., Papers in Regional Science moved from Springer to Wiley in 2005).

  3. As an illustration of the “cliquishness” in economics, Süssmuth et al. (2006) denounce the “institutional oligopolies” between editors and authors that may exist in European economic institutions.

  4. Snizek and Fuhrman (1979b) indicate that, in this matter, no bias is found between journals.

  5. This is discussed for today’s Russian Academy of Sciences; see Fortescue (1992).

  6. That does not mean that the peer-review process is never able to select the best papers (Bornmann and Daniel 2008).

  7. Coase won the Nobel Prize in Economics in 1991 and Williamson in 2009.

  8. See Frandsen and Wouters (2009) on the process that transforms a working paper into a journal article in the field of economics.

  9. On the number of authors and the number of papers, see, for example, Egghe (2008).

  10. Additionally, Nisonger (2002) shows that editorial board composition cannot serve to predict journal quality in business, political science, and genetics journals.

  11. Hochberg et al. (2009, p. 3) propose the following formulation: “We confirm that should our study have been previously submitted to another journal, we have taken all reviewers comments into account in revising our manuscript for submission to…”. They think this could avoid revealing that the paper has been rejected before: this is an illusion, because nobody withdraws an accepted paper!

  12. “Moreover, some journals are now asking authors of rejected manuscripts for permission to forward the reports of consenting reviewers to the journal where the authors intend to submit the revised study” (Hochberg et al. 2009, p. 3).

  13. This can be found on the website of The Economic Journal (published by Wiley-Blackwell):

    To improve speed and quality of decisions we encourage authors when submitting to us to include editors letters and referee reports from failed submissions at other journals. We of course reserve the right to use our own referees and provide our referees with copies of this correspondence but believe this step will be attractive to authors and further speed up the submission process.

    However, this policy seems to be specific to this journal rather than systematic across Wiley-Blackwell journals.

  14. The term is a telling one: to my mind, a senior postgraduate is a postgraduate who is not getting on with his thesis. I would prefer “PhD student”.

  15. Hall’s classic theory (1968) may be called upon to back up this assertion. Hall established a correspondence between attitudes of professionalism and the behavior of professionals, and made a distinction between occupations and professions: “…occupations which are attempting to become professions may be able to instill in their members strong professional attitudes, while the more established professions may contain less idealistic members”. This principle can be transposed to the distinction between PhD students and post-docs, who are not yet really researchers, and experienced academics.

  16. The gatekeepers are those “who decide what appears in the journal” (Braun et al. 2007, p. 542).

  17. Patterson and Harris (2009) have recommended reducing the acceptance rate of the journal Physics in Medicine and Biology from fifty percent to ten percent in order to increase the impact factor.

  18. The following quotation speaks volumes (Braun and Dióspatonyi 2005, p. 113):

    A journal is the product of a publishing house, a commercial enterprise dedicated to preparing and distributing the periodical, but interested in it largely from an economic point of view. Even for a cause as noble as the advancement of science it is improbable one could find a benefactor willing to underwrite and promote a science journal without serious attention of the laws of the marketplace. This is not to imply that all journals are the property of independent commercial publishers. Indeed, many belong to scientific societies or similar organizations, but the printing and marketing activities are usually delegated to publishers working under contract.

  19. Such remuneration mechanically increases the readership of the books, even if only marginally so.

  20. Many other recommendations could be proposed: the reader may refer to Roberts (2009) for a very large set (22!) of recommendations mainly for editors.

References

  • AEA. (2009). Journal of Economic Literature (JEL) classification system. American Economic Association. http://www.aeaweb.org/journal/jel_class_system.php.

  • Blank, R. M. (1991). The effects of double-blind versus single-blind reviewing: Experimental evidence from the American Economic Review. The American Economic Review, 81(5), 1041–1067.

  • Bloom, F. (1998). Human reviewers: The Achilles heel of scientific journals in a digital era. Presented at INABIS ’98, 5th internet world congress on biomedical sciences at McMaster University, Canada, December 7–16, keynote address. http://www.mcmaster.ca/inabis98/keynote/bloom/index.html.

  • Bornmann, L., & Daniel, H.-D. (2008). Selecting manuscripts for a high-impact journal through peer review: A citation analysis of communications that were accepted by Angewandte Chemie International Edition, or rejected but published elsewhere. Journal of the American Society for Information Science and Technology, 59(11), 1841–1852.

  • Bornmann, L., Nast, I., & Daniel, H.-D. (2008). Do editors and referees look for signs of scientific misconduct when reviewing manuscripts? A quantitative content analysis of studies that examined review criteria and reasons for accepting and rejecting manuscripts for publication. Scientometrics, 77(3), 415–432.

  • Bornmann, L., Weymuth, C., & Daniel, H.-D. (2009). A content analysis of referees’ comments: How do comments on manuscripts rejected by a high-impact journal and later published in either a low- or high-impact journal differ? Scientometrics. doi:10.1007/s11192-009-0011-4.

  • Braun, T., & Dióspatonyi, I. (2005). The journal gatekeepers of major publishing houses of core science journals. Scientometrics, 64(2), 113–120.

  • Braun, T., Dióspatonyi, I., Zsindely, S., & Zádora, E. (2007). Gatekeeper index versus impact factor of science journals. Scientometrics, 71(3), 541–543.

  • Campanario, J. M. (1996). Have referees rejected some of the most-cited articles of all times? Journal of the American Society for Information Science, 47(4), 302–310.

  • Campanario, J. M. (2009). Rejecting and resisting Nobel class discoveries: Accounts by Nobel Laureates. Scientometrics. doi:10.1007/s11192-008-2141-5.

  • Cicchetti, D. V. (1991). The reliability of peer review for manuscript and grant submissions: A cross-disciplinary investigation. Behavioral and Brain Sciences, 14, 119–186.

  • Egghe, L. (2008). A model for the size-frequency function of coauthor pairs. Journal of the American Society for Information Science and Technology, 59(13), 2133–2137.

  • Finney, D. J. (1997). The responsible referee. Biometrics, 53(2), 715–719.

  • Fiske, D. W., & Fogg, L. (1990). But the reviewers are making different criticisms of my paper! Diversity and uniqueness in reviewer comments. American Psychologist, 45(5), 591–598.

  • Fortescue, S. (1992). The Russian Academy of Sciences and the Soviet Academy of Sciences: Continuity or disjunction? Minerva, 30(4), 459–478.

  • Frandsen, T. F., & Wouters, P. (2009). Turning working papers into journal articles: An exercise in microbibliometrics. Journal of the American Society for Information Science and Technology, 60(4), 728–739.

  • Gans, J. H., & Shepherd, G. B. (1994). How are the mighty fallen: Rejected classic articles by leading economists. The Journal of Economic Perspectives, 8(1), 165–179.

  • Hall, R. H. (1968). Professionalization and bureaucratization. American Sociological Review, 33(1), 92–104.

  • Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.

  • Hargens, L. L., & Herting, J. R. (1990a). Neglected considerations in the analysis of agreement among journal referees. Scientometrics, 19(1–2), 91–106.

  • Hargens, L. L., & Herting, J. R. (1990b). A new approach to referees’ assessments of manuscripts. Social Science Research, 19, 1–16.

  • Hargens, L. L., & Herting, J. R. (2006). Analyzing the association between referees’ recommendations and editors’ decisions. Scientometrics, 67(1), 15–26.

  • Hartley, J. (2005). Refereeing and the single author. Journal of Information Science, 31(3), 251–256.

  • Hauser, M., & Fehr, E. (2007). An incentive solution to the peer review problem. PLoS Biology, 5(4), e107. doi:10.1371/journal.pbio.0050107.

  • Hochberg, M. E., Chase, J. M., Gotelli, N. J., Hastings, A., & Naeem, S. (2009). The tragedy of the reviewer commons. Ecology Letters, 12, 2–4.

  • Lee, J. D., Vicente, K. J., Cassano, A., & Shearer, A. (2003). Can scientific impact be judged prospectively? A bibliometric test of Simonton’s model of creative productivity. Scientometrics, 56(2), 223–233.

  • Lindsey, D. (1988). Assessing precision in the manuscript review process: A little better than a dice-roll. Scientometrics, 14(1–2), 75–82.

  • Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer’s own writing. Journal of Second Language Writing, 18, 30–43.

  • Mayo, N. E., Brophy, J., Goldberg, M. S., Klein, M. B., Miller, S., Platt, R. W., et al. (2006). Peering at peer review revealed high degree of chance associated with funding of grant applications. Journal of Clinical Epidemiology, 59, 842–848.

  • McDonald, S., & Kam, J. (2007). Aardvark et al.: Quality journals and gamesmanship in management studies. Journal of Information Science, 33(6), 702–717.

  • Min, H.-T. (2005). Training students to become successful peer reviewers. System, 33, 293–308.

  • Nelson, R. R., & Winter, S. G. (1982). An evolutionary theory of economic change. Cambridge, MA: Harvard University Press.

  • Nisonger, T. E. (2002). The relationship between international editorial board composition and citation measures in political science, business, and genetics journals. Scientometrics, 54(2), 257–268.

  • Patterson, M. S., & Harris, S. (2009). The relationship between reviewers’ quality-scores and number of citations for papers published in the journal Physics in Medicine and Biology from 2003–2005. Scientometrics, 80(2), 343–349.

  • Roberts, W. C. (2009). Reducing flaws in the review process of manuscripts submitted to medical journals for publication. American Journal of Cardiology, 103, 891–892.

  • Schultz, D. M. (2009). Are three heads better than two? How the number of reviewers and editor behavior affect the rejection rate. Scientometrics. doi:10.1007/s11192-009-0084-0.

  • Seglen, P. O. (1996). Quantification of scientific article contents. Scientometrics, 35(3), 355–366.

  • Snizek, W. E., & Fuhrman, E. R. (1979a). Some factors affecting the evaluative content of book reviews in sociology. The American Sociologist, 14, 108–114.

  • Snizek, W. E., & Fuhrman, E. R. (1979b). The evaluative content of book reviews in the American Journal of Sociology, Contemporary Sociology, and Social Forces. Contemporary Sociology, 8(3), 339–340.

  • Snizek, W. E., Fuhrman, E. R., & Wood, M. R. (1981). The effect of theory group association on the evaluative content of book reviews in sociology. The American Sociologist, 16, 185–195.

  • Süssmuth, B., Steininger, M., & Ghio, S. (2006). Towards a European economics of economics: Monitoring a decade of top research and providing some explanation. Scientometrics, 66(3), 579–612.

  • Van Rees, C. J. (1987). How reviewers reach consensus on the value of literary works. Poetics, 16, 275–294.

  • von Mises, L. (1944). Bureaucracy. New Haven: Yale University Press (third reprint, 1946).

  • Weller, A. C. (2001). Editorial peer review: Its strengths and weaknesses. ASIS&T monograph series. Medford, NJ: Information Today, Inc.

  • Zi-Lin, H. (2009). International collaboration does not have greater epistemic authority. Journal of the American Society for Information Science and Technology, 60(10), 2151–2164.


Author information

Correspondence to Louis de Mesnard.


About this article

Cite this article

de Mesnard, L. On Hochberg et al.’s “The tragedy of the reviewer commons”. Scientometrics 84, 903–917 (2010). https://doi.org/10.1007/s11192-009-0141-8
