
Scientometrics, Volume 120, Issue 3, pp 1373–1385

An evolutionary explanation of assassins and zealots in peer review

  • Jorge Chamorro-Padial
  • Rosa Rodriguez-Sánchez
  • J. Fdez-Valdivia
  • J. A. Garcia

Abstract

The peer review system aims to be effective in separating unacceptable from acceptable manuscripts. A reviewer, however, may or may not be able to distinguish between them: reviewers who distinguish unacceptable from acceptable manuscripts use a fine partition of categories, whereas reviewers who do not use a coarse partition in the evaluation of manuscripts. Most reviewers learned how to evaluate a manuscript from good and bad experiences, and they have been characterized as zealots (who uncritically favor a manuscript), assassins (who advise rejection much more frequently than the norm), and mainstream referees. In this paper we use the quasi-species model to describe the evolution of recommendation profiles in peer review. A recommendation profile consists of a reviewer's recommendation for each manuscript category under a particular categorization of manuscripts (fine or coarse). We view the reviewer's mind as being built up of recommendation profiles. Assassins, zealots and mainstream reviewers are "ecologically" interrelated species whose progeny tend to mutate through errors made in the process of reviewer training. We define the recommendation profile as the replicator, and selection arises because different types of recommendation profiles tend to replicate at different rates. Our results help to explain why assassins and zealots appear in peer review: they are a consequence of the evolutionary success of reviewers who do not distinguish between acceptable and unacceptable manuscripts.
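The selection-plus-mutation dynamics described in the abstract can be illustrated with a minimal discrete-time quasi-species simulation. The sketch below is purely illustrative, not the paper's actual model: the three profile types, their replication rates, and the "training error" mutation rate are all hypothetical values chosen for demonstration. Each generation, frequencies are weighted by fitness (selection), passed through a mutation matrix (errors in reviewer training), and renormalized.

```python
import numpy as np

# Discrete-time quasi-species (replicator-mutator) dynamics:
#   x'_i  ∝  sum_j f_j * Q[j, i] * x_j,  renormalized each generation.
# Types: 0 = mainstream, 1 = assassin, 2 = zealot (hypothetical labels).

f = np.array([1.0, 1.2, 1.2])    # replication rates (illustrative only)
mu = 0.05                        # per-generation "training error" rate
Q = np.full((3, 3), mu / 2)      # mutation matrix: each row sums to 1
np.fill_diagonal(Q, 1.0 - mu)

x = np.array([0.9, 0.05, 0.05])  # initial frequencies of the profiles
for _ in range(200):
    x = (f * x) @ Q              # selection, then mutation
    x = x / x.sum()              # renormalize to a frequency vector

print(np.round(x, 3))            # stationary mutant distribution
```

Under these assumed fitness values the two non-mainstream profiles come to dominate, while mutation keeps a residual mainstream fraction in the population, mirroring the paper's point that deviant profiles can be an evolutionary outcome rather than an anomaly.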

Keywords

Peer review · Reviewers · Assassins · Zealots · Manuscript categories · Quasi-species

Acknowledgements

This research was sponsored by the Spanish Board for Science, Technology, and Innovation under Grant TIN2017-85542-P, and co-financed with European FEDER funds. Sincere thanks are due to the reviewers for their constructive suggestions.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2019

Authors and Affiliations

  1. Departamento de Ciencias de la Computación e I. A., CITIC-UGR, Universidad de Granada, Granada, Spain
  2. CITIC-UGR, Universidad de Granada, Granada, Spain
