Scientometrics, Volume 93, Issue 2, pp 473–495

Universality of performance indicators based on citation and reference counts

Abstract

We find evidence for the universality of two relative bibliometric indicators of the quality of individual scientific publications taken from different data sets. One of these is a new index that considers both citation and reference counts. We demonstrate this universality for relatively well cited publications from a single institute, grouped by year of publication and by faculty or by department. We show similar behaviour in publications submitted to the arXiv e-print archive, grouped by year of submission and by sub-archive. We also find that for reasonably well cited papers this distribution is well fitted by a lognormal with a variance of around σ² = 1.3, which is consistent with the results of Radicchi et al. (Proc Natl Acad Sci USA 105:17268–17272, 2008). Our work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph. Further, it shows that averages of the logarithm of such relative bibliometric indices deal with the issue of long tails and avoid the need for statistics based on lengthy ranking procedures.
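
The following minimal sketch (in Python; not from the paper, and the sample citation counts are invented) illustrates the kind of relative indicator the abstract refers to: each paper's citation count is divided by the mean count of its publication-year and field group, in the spirit of Radicchi et al.'s c_f = c/c_0, and the mean and variance of log c_f are then examined, since under the universality claim log c_f should be roughly normal with variance near 1.3 for well cited papers.

    import numpy as np

    def relative_indicator(citations, group_mean):
        # c_f = c / c_0: a paper's citation count divided by the mean
        # citation count of its field-and-year group (Radicchi et al. 2008).
        return citations / group_mean

    # Hypothetical citation counts for a single field-year group.
    cites = np.array([3.0, 12.0, 7.0, 45.0, 1.0, 22.0, 9.0, 16.0, 5.0, 30.0])
    cf = relative_indicator(cites, cites.mean())

    # Averaging log(c_f) rather than c_f itself tames the long tail;
    # universality suggests log(c_f) is approximately normal with a
    # variance of around 1.3 for reasonably well cited papers.
    log_cf = np.log(cf[cf > 0])
    print("mean of log(c_f):", log_cf.mean())
    print("variance of log(c_f):", log_cf.var(ddof=1))

Note that the new index combining citation and reference counts is not reproduced in this sketch; only the citation-based c_f is shown.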

Keywords

Bibliometrics · Citation analysis · Crown indicator · Universality

Supplementary material

11192_2012_694_MOESM1_ESM.pdf (PDF, 1267 KB)

References

  1. Abbt, M., Limpert, E., & Stahel, W. A. (2001). Log-normal distributions across the sciences: Keys and clues. BioScience, 51, 341–352.
  2. Adams, J., Gurney, K. A., & Jackson, L. (2008). Calibrating the zoom: A test of Zitt's hypothesis. Scientometrics, 75, 81–95.
  3. Ahn, Y.-Y., Bagrow, J. P., & Lehmann, S. (2010). Link communities reveal multiscale complexity in networks. Nature, 466, 761–764.
  4. Aksnes, D. W., & Taxt, R. E. (2004). Peer reviews and bibliometric indicators: A comparative study at a Norwegian university. Research Evaluation, 13, 33–41.
  5. Albarrán, P., Crespo, J., Ortuño, I., & Ruiz-Castillo, J. (2011). The skewness of science in 219 sub-fields and a number of aggregates. Scientometrics, 88, 385–397.
  6. Bornmann, L., & Mutz, R. (2011). Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization. Journal of Informetrics, 5(1), 228–230.
  7. Bornmann, L., Wallon, G., & Ledin, A. (2008). Does the committee peer review select the best applicants for funding? An investigation of the selection process for two European Molecular Biology Organization programmes. PLoS ONE, 3, e3480.
  8. Bourne, C. P. (1977). Frequency and impact of spelling errors in bibliographic data bases. Information Processing and Management, 13, 1–12.
  9. Daniel, H.-D. (1993/2004). Guardians of science: Fairness and reliability of peer review. Weinheim: Wiley Interscience.
  10. Daniel, H.-D., & Bornmann, L. (2009). Universality of citation distributions: A validation of Radicchi et al.'s relative indicator c_f = c/c_0 at the micro level using data from chemistry. Journal of the American Society for Information Science and Technology, 60, 1664–1670.
  11. de S. Price, D. J. (1976). A general theory of bibliometric and other cumulative advantage processes. Journal of the American Society for Information Science, 27, 292–306.
  12. Eom, Y.-H., & Fortunato, S. (2011). Characterizing and modeling citation dynamics. PLoS ONE, 6, e24926.
  13. Evans, T. S. (2010). Clique graphs and overlapping communities. Journal of Statistical Mechanics, 2010, P12037.
  14. Evans, T. S., & Lambiotte, R. (2009). Line graphs, link partitions and overlapping communities. Physical Review E, 80, 016105.
  15. Evans, T. S., & Lambiotte, R. (2010). Line graphs of weighted networks for overlapping communities. The European Physical Journal B, 77, 265–272.
  16. Fortunato, S. (2010). Community detection in graphs. Physics Reports, 486, 75–174.
  17. Goldstone, R. L., Börner, K., & Maru, J. T. (2004). The simultaneous evolution of author and paper networks. Proceedings of the National Academy of Sciences of the USA, 101, 5266–5273.
  18. Gregory, S. (2011). Fuzzy overlapping communities in networks. Journal of Statistical Mechanics, 2011, P02017.
  19. Hagstrom, W. O. (1971). Inputs, outputs, and the prestige of university science departments. Sociology of Education, 44, 375.
  20. HEFCE. (2012). Assessment framework and guidance on submissions (REF 02.2011, July 2011); also "Part 2A Main Panel A Criteria" and the similarly named 2B and 2C documents. Retrieved February 15, 2012, from http://www.hefce.ac.uk/research/ref/.
  21. Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the USA, 102, 16569–16572.
  22. Larivière, V., & Gingras, Y. (2010). The impact factor's Matthew effect: A natural experiment in bibliometrics. Journal of the American Society for Information Science and Technology, 61, 424–427.
  23. Leydesdorff, L., & Bensman, S. (2006). Classification and powerlaws: The logarithmic transformation. Journal of the American Society for Information Science and Technology, 57, 1470–1486.
  24. Leydesdorff, L., & Opthof, T. (2011). Remaining problems with the "new crown indicator" (MNCS) of the CWTS. Journal of Informetrics, 5, 224–225.
  25. Leydesdorff, L., & Rafols, I. (2009). Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects. Journal of the American Society for Information Science and Technology, 60, 1–13.
  26. Leydesdorff, L., Bornmann, L., Mutz, R., & Opthof, T. (2011). Turning the tables on citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology, 62, 1370–1381.
  27. Lillquist, E., & Green, S. (2010). The discipline dependence of citation statistics. Scientometrics, 84, 749.
  28. Lundberg, J. (2007). Lifting the crown-citation z-score. Journal of Informetrics, 1, 145–154.
  29. Moed, H. F. (2005). Citation analysis in research evaluation. Berlin: Springer.
  30. Moed, H., De Bruin, R., & van Leeuwen, Th. (1995). New bibliometric tools for the assessment of national research performance: Database description, overview of indicators and first applications. Scientometrics, 33, 381–422.
  31. Nicolaisen, J., & Frandsen, T. F. (2008). The reference return ratio. Journal of Informetrics, 2, 128.
  32. Radicchi, F., & Castellano, C. (2009). On the fairness of using relative indicators for comparing citation performance in different disciplines. Scientometrics, 57, 85–90.
  33. Radicchi, F., & Castellano, C. (2011). Rescaling citations of publications in physics. Physical Review E, 83, 046116.
  34. Radicchi, F., Fortunato, S., & Castellano, C. (2008). Universality of citation distributions: Toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences of the USA, 105, 17268–17272.
  35. Samukhin, A. N., Dorogovtsev, S. N., & Mendes, J. F. F. (2000). Structure of growing networks with preferential linking. Physical Review Letters, 85, 4633–4636.
  36. Schubert, A., & Braun, T. (1986). Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics, 9, 281–291.
  37. Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628.
  38. Thomson Reuters. (2009). Web of Science. http://www.isiknowledge.com. Accessed March 2011.
  39. van Raan, A., Moed, H., & van Leeuwen, T. (2007). Scoping study on the use of bibliometric analysis to measure the quality of research in UK higher education institutions. Report to HEFCE by the Centre for Science and Technology Studies, Leiden University, November 2007.
  40. Vinkler, P. (1986). Evaluation of some methods for the relative assessment of scientific publications. Scientometrics, 10, 157–177.
  41. Vinkler, P. (1997). Relations of relative scientometric impact indicators: The relative publication strategy index. Scientometrics, 40, 163–169.
  42. Waltman, L., van Eck, N. J., van Leeuwen, T. N., Visser, M. S., & van Raan, A. F. J. (2011). Towards a new crown indicator: Some theoretical considerations. Journal of Informetrics, 5, 37–47.
  43. Waltman, L., van Eck, N. J., & van Raan, A. F. J. (2012). Universality of citation distributions revisited. Journal of the American Society for Information Science and Technology, 63, 72–77.
  44. Yanovsky, V. (1981). Citation analysis significance of scientific journals. Scientometrics, 3, 223.
  45. Yu, D., Wang, M., & Yu, G. (2009). Effect of the age of papers on the preferential attachment in citation networks. Physica A, 388, 4273–4276.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

Department of Physics, Imperial College London, UK