Examining national citation impact by comparing developments in a fixed and a dynamic journal set
To examine how methodological choices may influence developments in countries' relative citation scores, we construct a fixed journal set of 3232 journals continuously indexed in the Web of Science from 1981 to 2014. From this restricted set, we build a citation database capturing the citation relations between the journal publications and calculate relative citation scores based on full and fractional counting for the whole period. Previous longitudinal studies of citation impact, all based on a dynamic set of journals, show stable rankings between countries. To examine such findings for potential “database effects”, we compare them to results from our fixed set. We find that relative developments in impact scores, country profiles, and rankings are very stable and very similar within and between the two journal sets as well as the two counting methods. We do observe a small “inflation factor”: citation scores for high-performing countries are generally somewhat lower in the fixed set than in the dynamic set. Consequently, even with a journal set covering an ever-decreasing share of the dynamic set, we are able to accurately reproduce the developments in impact scores and the country rankings found in the dynamic set. Hence, potential effects of methodological choices appear to be of limited importance compared to the stability of citation networks.
Keywords: Fixed journal set · Database effects · National citation impact · Longitudinal study
The research was funded by the Research Council of Norway, Grant No. 256223 (the R-QUEST centre).
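To make the distinction between the two counting methods concrete, the following is a minimal sketch of full versus fractional counting of national citation impact. All names and the toy records are hypothetical, and the plain world-average normalization is a deliberately simplified stand-in for the field- and year-normalized relative indicators used in the study.

```python
from collections import defaultdict

# Toy records standing in for the journal-set citation database:
# each publication lists its contributing countries and citation count.
publications = [
    {"countries": {"NO"}, "citations": 12},
    {"countries": {"NO", "DK"}, "citations": 30},
    {"countries": {"DK"}, "citations": 4},
    {"countries": {"NO", "DK", "US"}, "citations": 9},
]

def relative_citation_scores(pubs, fractional=False):
    """Country mean citations per paper divided by the world mean.

    Full counting: every contributing country receives weight 1 per paper.
    Fractional counting: each paper's unit weight is split equally among
    its contributing countries.
    """
    weight = defaultdict(float)  # paper credit per country
    cites = defaultdict(float)   # citation credit per country
    world_papers = 0.0
    world_cites = 0.0

    for pub in pubs:
        w = 1.0 / len(pub["countries"]) if fractional else 1.0
        for country in pub["countries"]:
            weight[country] += w
            cites[country] += w * pub["citations"]
        world_papers += 1
        world_cites += pub["citations"]

    world_mean = world_cites / world_papers
    return {c: (cites[c] / weight[c]) / world_mean for c in weight}

print(relative_citation_scores(publications, fractional=False))
print(relative_citation_scores(publications, fractional=True))
```

Under fractional counting, an internationally co-authored paper's citations are split among the contributing countries, which typically lowers the scores of countries with high shares of highly cited international collaboration; the world average itself is unaffected, since each paper's weights still sum to one.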