Volume 91, Issue 3, pp 911–924

Citation rates in mathematics: a study of variation by subdiscipline



Variation of citation counts across subdisciplines within a single discipline is known but rarely studied systematically. This paper compares citation counts for award-winning mathematicians in different subdisciplines of mathematics. Mathematicians were selected for study in groups of roughly equivalent peer evaluation, where evaluation is taken to be the awarding of major prizes and grants: Guggenheim fellowships, Sloan fellowships, and National Science Foundation CAREER grants. We find a pattern in which mathematicians working in some subdisciplines have fewer citations than winners of the same award in other subdisciplines, and this pattern is consistent across all three awards. Thus, even after discipline-level adjustment for differing overall citation rates, citation counts for different subdisciplines do not match peer evaluation. Demographic and hiring data for mathematics provide context for a discussion of reasons and interpretations.


Citation analysis · Mathematics · Subdisciplines · Peer evaluation · Awards · Awardees · Grants · Grantees



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

  1. Department of Mathematics, Louisiana State University, Baton Rouge, USA
  2. LSU Libraries, Louisiana State University, Baton Rouge, USA
