Disciplinary classification of science is essential to bibliometric analyses. Given the conceptual and technical difficulties of classifying individual papers into disciplines and specialties, most classification systems are implemented at the journal level, which affects the classification of papers published in multidisciplinary journals. To investigate the effect of different classification systems on bibliometric evaluations, this study compares the rankings of the most productive institutions and the most productive authors under the two types of classification. Results show that the classification of papers has less influence on rankings at the institutional level than at the individual level. Implications for bibliometric evaluations are discussed.
In this study, China refers to mainland China, which is the geopolitical area under the direct jurisdiction of the People's Republic of China, excluding Hong Kong and Macau.
Since no level 2 discipline falls under General Social Science, General Natural Science, Transportation, Aviation and Aerospace, or Multidiscipline, these five level 1 disciplines are also investigated as level 2 disciplines.
Although the number of top 1‰ authors (including ties) is fewer than 10 in some disciplines, we set the minimum denominator to 10 to avoid outliers.
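The denominator floor described above can be sketched as follows (a minimal illustration; the function name and the interpretation of the counts are assumptions, not from the paper):

```python
def effective_denominator(num_top_authors: int, floor: int = 10) -> int:
    """Return the denominator used when computing shares of top 1 per mille authors.

    When a discipline has fewer than `floor` top (and tied) authors,
    the floor is used instead, damping the effect of outliers in
    disciplines with very few top authors.
    """
    return max(num_top_authors, floor)


print(effective_denominator(6))   # → 10 (fewer than 10 top authors: floor applies)
print(effective_denominator(25))  # → 25 (enough top authors: actual count is used)
```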
The number of papers under investigation in a given discipline was calculated as A + B − O, as shown in Fig. 2.
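The count A + B − O is the standard inclusion–exclusion formula for the size of a union of two sets. A minimal sketch (the paper IDs and variable names are hypothetical; A and B are taken here to be the paper sets a discipline receives under the two classification systems, with O their overlap):

```python
# Hypothetical paper IDs assigned to one discipline by each classification system.
papers_journal_level = {"p1", "p2", "p3"}   # A: journal-level classification
papers_paper_level = {"p3", "p4"}           # B: paper-level classification

# O: papers assigned by both systems (the overlap).
overlap = papers_journal_level & papers_paper_level

# A + B - O counts each paper exactly once, i.e. the size of the union.
papers_under_investigation = (
    len(papers_journal_level) + len(papers_paper_level) - len(overlap)
)
assert papers_under_investigation == len(papers_journal_level | papers_paper_level)
print(papers_under_investigation)  # → 4
```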
This study is supported by an SSHRC Postdoctoral Fellowship (75620190196) and the Social Sciences Foundation of China (19ZDA348).
Cite this article
Shu, F., Ma, Y., Qiu, J. et al. Classifications of science and their effects on bibliometric evaluations. Scientometrics 125, 2727–2744 (2020). https://doi.org/10.1007/s11192-020-03701-4
- Classification of science
- Chinese Library Classification
- Research evaluation