Universality of performance indicators based on citation and reference counts
We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or by department. We show similar behaviour in publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that for reasonably well cited papers this distribution is well fitted by a lognormal with a variance of around σ² = 1.3, which is consistent with the results of Radicchi et al. (Proc Natl Acad Sci USA 105:17268–17272, 2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures.
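The rescaling described above can be illustrated with a minimal sketch. The citation counts, field names, and helper functions below are hypothetical, not taken from the paper's data; the sketch only shows the general idea of dividing each paper's citation count by its field's mean and then averaging logarithms, which avoids the heavy right tail of raw citation distributions.

```python
import math

# Hypothetical citation counts for papers in two fields (illustrative only).
# Field B cites ten times more heavily than field A on average.
field_a = [12, 45, 3, 78, 20]
field_b = [120, 450, 30, 780, 200]

def relative_indicator(counts):
    """Rescale each citation count by the field's mean count, c_f = c / <c>."""
    mean = sum(counts) / len(counts)
    return [c / mean for c in counts]

def log_average(rescaled):
    """Average of log(c_f): a summary statistic robust to the long tail."""
    return sum(math.log(r) for r in rescaled) / len(rescaled)

rel_a = relative_indicator(field_a)
rel_b = relative_indicator(field_b)

# After rescaling, the two fields' indicators coincide, so their
# log-averages agree even though the raw counts differ tenfold.
print(log_average(rel_a), log_average(rel_b))
```

By construction the rescaled values have mean 1 within each field, so papers from fields with very different citation cultures can be compared on a common scale.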
Keywords: Bibliometrics · Citation analysis · Crown indicator · Universality
We would like to thank L. Waltman, N. J. van Eck and A. F. J. van Raan for useful comments. NH would like to thank the Nuffield Foundation for a Summer Student bursary. BSK would like to thank the Imperial College London UROP scheme for a bursary. We thank O. Kibaroglu and D. Hook for help in obtaining and interpreting the raw data, Thomson Reuters for allowing us to use the citation and reference counts for the data for the Institute, and P. Ginsparg for providing the data from arXiv.
- Daniel, H.-D. (1993/2004). Guardians of science. Fairness and reliability of peer review. Weinheim: Wiley Interscience.
- HEFCE report. (2012). “Assessment framework and guidance on submissions”, REF 02.2011, July 2011. Also “Part 2A Main Panel A Criteria”, and similarly named 2B and 2C documents. Retrieved February 15, 2012, from http://www.hefce.ac.uk/research/ref/.
- Larivière, V., & Gingras, Y. (2010). The impact factor’s Matthew effect: A natural experiment in bibliometrics. Journal of the American Society for Information Science and Technology, 61, 424–427.
- Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables in citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62, 1370–1381.
- Moed, H. F. (2005). Citation analysis in research evaluation. Berlin: Springer.
- Radicchi, F., & Castellano, C. (2009). On the fairness of using relative indicators for comparing citation performance in different disciplines. Scientometrics, 57, 85–90.
- Radicchi, F., & Castellano, C. (2011). Rescaling citations of publications in physics. Physical Review E, 83, 046116.
- Thomson Reuters. (2009). Web of Science. http://www.isiknowledge.com. Accessed March 2011.
- van Raan, A., Moed, H., & van Leeuwen, T. (2007). Scoping study on the use of bibliometric analysis to measure the quality of research in UK higher education institutions. Report to HEFCE by the Centre for Science and Technology Studies, Leiden University, November 2007.