## Abstract

The peer review system aims to separate acceptable from unacceptable manuscripts. However, an individual reviewer may or may not be able to distinguish between them. Reviewers who distinguish unacceptable from acceptable manuscripts use a fine partition of manuscript categories; reviewers who do not distinguish them use a coarse partition when evaluating manuscripts. Most reviewers learned how to evaluate a manuscript from good and bad experiences, and they have been characterized as zealots (who uncritically favor a manuscript), assassins (who advise rejection much more frequently than the norm), and mainstream referees. In this paper we use the quasi-species model to describe the evolution of recommendation profiles in peer review. A recommendation profile consists of a reviewer's recommendation for each manuscript category under a particular categorization of manuscripts (fine or coarse), and we view the reviewer's mind as built up from such profiles. Assassins, zealots and mainstream reviewers are “ecologically” interrelated species whose progeny tend to mutate through errors made in the process of reviewer training. We define the recommendation profile as the replicator, and selection arises because different types of recommendation profiles tend to replicate at different rates. Our results help to explain why assassins and zealots emerge over evolutionary time in peer review: they stem from the evolutionary success of reviewers who do not distinguish acceptable from unacceptable manuscripts.


## References

Bull, J. J., Meyers, L. A., & Lachmann, M. (2005). Quasispecies made simple. *PLoS Computational Biology*, *1*(6), e61. https://doi.org/10.1371/journal.pcbi.0010061

Burnham, J. C. (1990). The evolution of editorial peer review. *JAMA*, *263*(10), 1323–1329.

Campanario, J. M. (1998a). Peer review for journals as it stands today—Part 1. *Science Communication*, *19*(3), 181–211.

Campanario, J. M. (1998b). Peer review for journals as it stands today—Part 2. *Science Communication*, *19*(4), 277–306.

Chubin, D. E., & Hackett, E. J. (1990). *Peerless science: Peer review and U.S. science policy*. Stony Brook, NY: State University of New York Press.

Eigen, M., & Schuster, P. (1979). *The hypercycle: A principle of natural self-organization*. Berlin: Springer.

Garcia, J. A., Rodriguez-Sanchez, R., & Fdez-Valdivia, J. (2015a). The author-editor game. *Scientometrics*, *104*(1), 361–380. https://doi.org/10.1007/s11192-015-1566-x

Garcia, J. A., Rodriguez-Sanchez, R., & Fdez-Valdivia, J. (2015b). Adverse selection of reviewers. *Journal of the Association for Information Science and Technology*, *66*(6), 1252–1262. https://doi.org/10.1002/asi.23249

Garcia, J. A., Rodriguez-Sanchez, R., & Fdez-Valdivia, J. (2016). Why the referees’ reports I receive as an editor are so much better than the reports I receive as an author? *Scientometrics*, *106*(3), 967–986. https://doi.org/10.1007/s11192-015-1827-8

Lee, C. J., Sugimoto, C. R., Zhang, G., & Cronin, B. (2013). Bias in peer review. *Journal of the American Society for Information Science and Technology*, *64*(1), 2–17.

Mengel, F. (2012). On the evolution of coarse categories. *Journal of Theoretical Biology*, *307*(21), 117–124. https://doi.org/10.1016/j.jtbi.2012.05.016

Merton, R. K. (1973). *The sociology of science: Theoretical and empirical investigations*. Chicago: University of Chicago Press.

Rodriguez-Sanchez, R., Garcia, J. A., & Fdez-Valdivia, J. (2016). Evolutionary games between authors and their editors. *Applied Mathematics and Computation*, *273*(15), 645–655. https://doi.org/10.1016/j.amc.2015.10.034

Schuster, P., & Swetina, J. (1988). Stationary mutant distributions and evolutionary optimization. *Bulletin of Mathematical Biology*, *50*(6), 635–660. https://doi.org/10.1007/BF02460094

Siegelman, S. S. (1991). Assassins and zealots: Variations in peer review. Special report. *Radiology*, *178*(3), 637–642. https://doi.org/10.1148/radiology.178.3.1994394

Souder, L. (2011). The ethics of scholarly peer review: A review of the literature. *Learned Publishing*, *24*(1), 55–72.

Tenopir, C., & King, D. W. (2007). Perceptions of value and value beyond perceptions: Measuring the quality and value of journal article readings. *Serials*, *20*(3), 199–207.

## Acknowledgements

This research was sponsored by the Spanish Board for Science, Technology, and Innovation under Grant TIN2017-85542-P, and co-financed with European FEDER funds. Sincere thanks are due to the reviewers for their constructive suggestions.


## Appendix A: Proof

We have to prove that there is an error threshold \({\hat{\epsilon }}\), decreasing in \(|f -1/2|\), such that whenever errors in reviewer training are sufficiently frequent (\(\epsilon > {\hat{\epsilon }}\)), the coarse partition (under which reviewers do not distinguish categories of unacceptable and acceptable manuscripts) yields a higher average reward for a population of reviewers than the fine partition.

To this end we follow the proof of Result 1 in Mengel (2012). Given the average reward of the reviewers’ population using the coarse partition \(K_\mathrm{C}\), \({\hat{\pi }} (K_\mathrm{C})\), and that using the fine partition \(K_\mathrm{F}\), \({\hat{\pi }} (K_\mathrm{F})\), we show that \({\hat{\pi }} (K_\mathrm{F}) - {\hat{\pi }} (K_\mathrm{C})\) decreases in \(\epsilon\) for all \(\epsilon < 1/2\).
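The dynamics behind this comparison are those of the standard quasi-species (replicator–mutator) system. A minimal sketch, where \(x_i\) denotes the frequency of recommendation profile \(i\), \(f_j\) the reward of profile \(j\), and \(q_{ij}\) the probability that training a reviewer who carries profile \(j\) produces profile \(i\) (this notation is ours):

```latex
\dot{x}_i \;=\; \sum_j q_{ij}\, f_j\, x_j \;-\; \bar{f}\, x_i ,
\qquad
\bar{f} \;=\; \sum_j f_j\, x_j .
```

At the stationary distribution the average reward \(\bar{f}\) equals the largest eigenvalue of the mutation–selection matrix \(W = (q_{ij} f_j)\) (Eigen & Schuster 1979; Bull et al. 2005), which is why both \({\hat{\pi }} (K_\mathrm{C})\) and \({\hat{\pi }} (K_\mathrm{F})\) below are computed as leading eigenvalues.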

The average reward of the reviewers’ population using the partition \(K_\mathrm{C}\) is the largest eigenvalue of the matrix

which is given by

and therefore, taking derivatives we find

hence, \(-1 \le \frac{\partial {\hat{\pi }} (K_\mathrm{C})}{\partial \epsilon } \le 0.\)
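This bound can be checked with a small numerical sketch, under illustrative assumptions of ours (in the spirit of Mengel 2012, not necessarily the paper's exact payoffs): a fraction \(f\) of manuscripts is acceptable, the two coarse profiles (reject) and (accept) earn rewards \(1-f\) and \(f\), and training copies a profile but flips it with probability \(\epsilon\):

```python
# Average reward pi_hat(K_C) of the coarse-partition population, as the
# largest eigenvalue of a 2x2 mutation-selection matrix.
# ILLUSTRATIVE ASSUMPTIONS (ours): rewards 1-f and f for the profiles
# (reject) and (accept); copying error eps, so
#   W(K_C) = [[ (1-eps)*(1-f), eps*f     ],
#             [ eps*(1-f),     (1-eps)*f ]].
import math

def pi_coarse(eps: float, f: float) -> float:
    """Largest eigenvalue of W(K_C), in closed form for the 2x2 case."""
    tr = 1.0 - eps                            # trace of W(K_C)
    det = f * (1.0 - f) * (1.0 - 2.0 * eps)   # determinant of W(K_C)
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

# Uniform case f = 1/2: the average reward is 1/2 for every eps.
print(pi_coarse(0.1, 0.5))        # ≈ 0.5
# Finite-difference check of -1 <= d pi_hat(K_C)/d eps <= 0, here at f = 0.7.
slope = (pi_coarse(0.11, 0.7) - pi_coarse(0.10, 0.7)) / 0.01
print(-1.0 <= slope <= 0.0)       # True
```

In this toy model the uniform case \(f = 1/2\) gives a leading eigenvalue of exactly \(1/2\) for every \(\epsilon\), matching the value used at the end of the proof.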

Similarly, the average reward of the reviewers’ population using the partition \(K_\mathrm{F}\) is the largest eigenvalue of the matrix

which solves the equilibrium of the quasi-species equations

From this equilibrium we get

where we denote by \(p_{(i)}\) the frequency of recommendation profile (*i*) in the population of peer reviewers using partition \(K_\mathrm{F}\), and there are four possible recommendation profiles, i.e., (1) = (reject, reject); (2) = (reject, accept); (3) = (accept, reject); (4) = (accept, accept). Therefore it follows that
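With the four profiles enumerated, \({\hat{\pi }} (K_\mathrm{F})\) can be checked numerically in the same spirit. The sketch below again uses illustrative assumptions of ours (not the paper's exact payoffs): rewards \(1-f\), \(1\), \(0\) and \(f\) for profiles (1)–(4), each profile entry copied independently with error probability \(\epsilon\):

```python
# Average reward pi_hat(K_F) of the fine-partition population, as the
# leading eigenvalue of a 4x4 mutation-selection matrix W(K_F).
# ILLUSTRATIVE ASSUMPTIONS (ours): profile (2) = (reject, accept) is correct
# in both categories, so rewards are 1-f, 1, 0, f; the two entries of a
# profile are copied independently, each flipped with probability eps.

def pi_fine(eps: float, f: float, iters: int = 500) -> float:
    """Leading eigenvalue of W(K_F) by power iteration (W is non-negative)."""
    profiles = [(0, 0), (0, 1), (1, 0), (1, 1)]   # 0 = reject, 1 = accept
    rewards = [1.0 - f, 1.0, 0.0, f]
    def q(a, b):                                  # copying probability a -> b
        p = 1.0
        for x, y in zip(a, b):
            p *= eps if x != y else 1.0 - eps
        return p
    w = [[q(profiles[j], profiles[i]) * rewards[j] for j in range(4)]
         for i in range(4)]
    v, lam = [0.25] * 4, 0.0
    for _ in range(iters):                        # power iteration
        u = [sum(w[i][j] * v[j] for j in range(4)) for i in range(4)]
        lam = sum(u)                              # eigenvalue estimate (sum(v) == 1)
        v = [x / lam for x in u]
    return lam

# Without training errors the correct profile takes over: average reward 1.
print(abs(pi_fine(0.0, 0.5) - 1.0) < 1e-9)            # True
# pi_hat(K_F) falls as eps grows, while at f = 1/2 the coarse value stays
# flat at 1/2, so pi_hat(K_F) - pi_hat(K_C) shrinks with eps.
print(pi_fine(0.1, 0.5) > pi_fine(0.3, 0.5) > 0.5)    # True
```

At \(f = 1/2\) this toy reproduces the qualitative claims of the proof: \({\hat{\pi }} (K_\mathrm{F})\) decreases in \(\epsilon\) while \({\hat{\pi }} (K_\mathrm{C})\) remains at \(1/2\), so the difference \({\hat{\pi }} (K_\mathrm{F}) - {\hat{\pi }} (K_\mathrm{C})\) shrinks as training errors become more frequent.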

We observe that taking differences between \({\hat{\pi }} (K_\mathrm{F})\) and \({\hat{\pi }} (K_\mathrm{C})\) we find (with \(f \not = 0.5\))

Now taking derivatives in \({\hat{\pi }} (K_\mathrm{F})\) we get

Therefore, taking differences between \(\frac{\partial {\hat{\pi }} (K_\mathrm{F})}{\partial \epsilon }\) and \(\frac{\partial {\hat{\pi }} (K_\mathrm{C})}{\partial \epsilon }\) we find

Hence, given *f*, both \(\frac{\partial {\hat{\pi }} (K_\mathrm{F})}{\partial \epsilon }\) and \(\frac{\partial {\hat{\pi }} (K_\mathrm{C})}{\partial \epsilon }\) are negative and continuous for all values of \(\epsilon\). Therefore, there is an \({\hat{\epsilon }}\), with \(0< {\hat{\epsilon }} < \frac{1}{4}\), such that

Also, by Lemma 2 in Mengel (2012), we have that, for any \(\epsilon >0\), \({\hat{\pi }} (K_\mathrm{F}) - {\hat{\pi }} (K_\mathrm{C})\) is maximized at \(f = 1/2\). Hence, following Mengel (2012), the upper bound on \({\hat{\epsilon }}(f)\) can be found by looking at the uniform case. Therefore, the set of the eigenvalues for \(W(K_\mathrm{F})\) is

To complete the proof we only have to observe that the maximal eigenvalue for the coarse partition \(W(K_\mathrm{C} )\) is given by \(\lambda =1/2\), which exceeds the maximal element of the set of eigenvalues for \(W(K_\mathrm{F})\) whenever \(\epsilon > 1/4\).


## About this article

### Cite this article

Chamorro-Padial, J., Rodriguez-Sánchez, R., Fdez-Valdivia, J. *et al.* An evolutionary explanation of assassins and zealots in peer review.
*Scientometrics* **120**, 1373–1385 (2019). https://doi.org/10.1007/s11192-019-03171-3


### Keywords

- Peer review
- Reviewers
- Assassins
- Zealots
- Manuscript categories
- Quasi-species