Scientometrics, Volume 108, Issue 1, pp 337–347

Interpreting correlations between citation counts and other indicators

Abstract

Altmetrics and other indicators of the impact of academic outputs are often correlated with citation counts to help assess their value. Nevertheless, there are no guidelines for judging the strengths of the correlations found. This is a problem because the correlation strength affects the conclusions that should be drawn. In response, this article uses experimental simulations to estimate the correlation strengths to be expected under a range of conditions. The results show that the correlation strength reflects not only the underlying degree of association but also the average magnitude of the numbers involved. Overall, the results suggest that, because of the number of assumptions that must be made, it will rarely be possible in practice to interpret the strength of a correlation coefficient realistically.

Keywords

Citation analysis · Correlation · Altmetrics · Indicators · Discretised lognormal · Simulation
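To illustrate the kind of simulation the abstract describes, the minimal sketch below correlates two discretised lognormal indicators that share a latent association. All names and parameter values (the function simulate_correlation, the latent correlation rho_latent, the location parameter mu) are illustrative assumptions, not the paper's actual model or code; the sketch only shows how the measured correlation can change with the average magnitude of the counts even when the underlying association is fixed.

```python
# A minimal sketch (not the paper's code): two discretised lognormal
# indicators whose logs share a latent component with a fixed correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

def simulate_correlation(n=10000, mu=1.0, sigma=1.0, rho_latent=0.7):
    """Draw two discretised lognormal variables whose logs have latent
    correlation rho_latent, and return their Spearman correlation."""
    cov = [[1.0, rho_latent], [rho_latent, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # Continuous lognormal values, then discretise to integer counts.
    x = np.floor(np.exp(mu + sigma * z[:, 0]))
    y = np.floor(np.exp(mu + sigma * z[:, 1]))
    return spearmanr(x, y).correlation

# Same underlying association, different average magnitudes of counts.
for mu in (0.1, 1.0, 3.0):
    print(f"mu = {mu}: Spearman r = {simulate_correlation(mu=mu):.2f}")
```

With small mu most simulated counts collapse to 0 or 1, so the observed correlation falls well below the latent 0.7, whereas larger mu values recover correlations closer to it, consistent with the abstract's point that correlation strength depends on the magnitude of the numbers as well as the underlying association.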


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

Statistical Cybermetrics Research Group, University of Wolverhampton, Wolverhampton, UK