Abstract
This study analyses the relationship between the peer-review activity of scholars registered in Publons and their research performance as reflected in Google Scholar. Using a scientometric approach, it explores correlations between peer-review measures and bibliometric indicators. In addition, decision trees are used to identify which researchers (by discipline, academic status and gender) perform the most reviews and which accept the most papers, assuming that these are reasonable proxies for reviewing quality. Results show only a weak correlation between bibliometric indicators and peer-review activity. The decision-tree analysis suggests that established male academics perform the most reviews, while young female scholars are the most demanding reviewers. These results could help editors select good reviewers and open a new source of data for scientometric analyses.
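As an illustration of the kind of analysis described above (not the paper's own code), the following Python sketch computes a rank correlation between reviewing activity and a bibliometric indicator and fits a shallow decision tree on reviewer characteristics. The column names and sample values are invented, and a CART tree stands in for whatever tree method the study actually applied.

import pandas as pd
from scipy.stats import spearmanr
from sklearn.tree import DecisionTreeClassifier

# Hypothetical merged sample of Publons reviewers and their Google Scholar profiles.
df = pd.DataFrame({
    "verified_reviews": [120, 4, 35, 9, 60, 2],
    "h_index": [22, 5, 14, 8, 18, 3],
    "discipline": ["medicine", "sociology", "medicine", "engineering", "engineering", "sociology"],
    "status": ["professor", "phd", "professor", "postdoc", "professor", "phd"],
    "gender": ["m", "f", "m", "f", "m", "f"],
})

# Rank correlation between peer-review activity and a bibliometric indicator.
rho, p = spearmanr(df["verified_reviews"], df["h_index"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Segment "heavy reviewers" (here arbitrarily > 30 verified reviews) by
# discipline, academic status and gender with a shallow decision tree.
X = pd.get_dummies(df[["discipline", "status", "gender"]])
y = (df["verified_reviews"] > 30).astype(int)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(dict(zip(X.columns, tree.feature_importances_.round(2))))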
Notes
However, this practice could affect the efficiency of the service; for future studies, the owners ask to be contacted or that the publicly available API (https://publons.com/api/) be used (a minimal sketch of such an API call follows these notes).
Data from this study are publicly available at http://hdl.handle.net/10760/29799.
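For completeness, a minimal sketch (not taken from the paper) of retrieving reviewer records through the public Publons API mentioned in the first note; the endpoint path, token header and response fields below are assumptions and should be checked against the API documentation.

import requests

API_ROOT = "https://publons.com/api/"        # URL given in the note above
ENDPOINT = API_ROOT + "v2/academic/review/"  # hypothetical endpoint path
TOKEN = "YOUR_API_TOKEN"                     # hypothetical authentication token

# Request one page of review records and print them.
resp = requests.get(
    ENDPOINT,
    headers={"Authorization": f"Token {TOKEN}"},  # assumed auth scheme
    params={"page": 1},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("results", []):     # "results" key is an assumption
    print(record)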
Cite this article
Ortega, J.L. Are peer-review activities related to reviewer bibliometric performance? A scientometric analysis of Publons. Scientometrics 112, 947–962 (2017). https://doi.org/10.1007/s11192-017-2399-6