Abstract
In a recent paper in this journal, Mason and Singh (Scientometrics 127:3683–3694, 2022) argue that since there are cases where a research paper is published in an academic journal that is ‘top-ranked’ in one subject category (or research area) but ‘bottom-ranked’ in another (based on some citation-based metric such as SCImago Journal Rankings (SJR) or Journal Citation Reports (JCR)), it follows that it is illogical to use such rankings as a proxy for research quality, impact, and prestige. They conclude that using such rankings for academic hiring, tenure, promotion, and funding is likewise illogical. In this discussion note, we argue that while their premise is true, their conclusion is a non sequitur.
Notes
Both of these normalize citation counts over an elapsed period relative to the number of articles published in the same period.
For simplicity of presentation, we refer only to SJR rankings throughout this paper.
The details of the SJR ranking for the Journal of the Philosophy of History are available at https://www.scimagojr.com/journalsearch.php?q=16800154742&tip=sid&clean=0, and those for the British Journal for the History of Philosophy at https://www.scimagojr.com/journalsearch.php?q=6500153189&tip=sid&clean=0.
To be precise, we could distinguish between a journal’s citable and non-citable items or contents (Guerrero-Bote and Moya-Anegón 2012; McVeigh and Mann 2009). ‘Citable items’ refer to research articles, discussion notes, commentaries, case reports, etc. On the other hand, ‘non-citable items’ refer to letters to the Editor, news, corrections, book reviews, etc. For our purposes, we only refer to a journal’s citable contents.
As Mason and Singh note, ‘the SJR platform provides an automated function for journals to share their best quartile’ (Mason and Singh 2022, 3691).
This argument is presented in Mason et al. (2021).
References
Guerrero-Bote, V. P., & Moya-Anegón, F. (2012). A further step forward in measuring journals’ scientific prestige: The SJR2 indicator. Journal of Informetrics, 6, 674–688.
Mason, S., Merga, M. K., González Canché, M. S., & Mat Roni, S. (2021). The internationality of published higher education scholarship: How do the ‘top’ journals compare? Journal of Informetrics, 15, 101155. https://doi.org/10.1016/j.joi.2021.101155
Mason, S., & Singh, L. (2022). When a journal is both at the ‘top’ and the ‘bottom’: The illogicality of conflating citation-based metrics with quality. Scientometrics, 127, 3683–3694. https://doi.org/10.1007/s11192-022-04402-w
McVeigh, M. E., & Mann, S. J. (2009). The journal impact factor denominator: Defining citable (Counted) items. JAMA, 302, 1107–1109. https://doi.org/10.1001/jama.2009.1301
Niles, M. T., Schimanski, L. A., McKiernan, E. C., & Alperin, J. P. (2020). Why we publish where we do: Faculty publishing values and their relationship to review, promotion and tenure expectations. PLoS ONE, 15(3), e0228914. https://doi.org/10.1371/journal.pone.0228914
Cite this article
Joaquin, J.J., Tan, R.R. & Biana, H.T. So, what if a journal is both at the ‘top’ and ‘bottom’: reply to Mason and Singh. Scientometrics 128, 5859–5863 (2023). https://doi.org/10.1007/s11192-023-04809-z