Volume 107, Issue 3, pp 941–961

How do prolific professors influence the citation impact of their university departments?

  • Fredrik Niclas Piro
  • Kristoffer Rørstad
  • Dag W. Aksnes


Professors and associate professors (“professors”) in full-time positions are key personnel in the scientific activity of university departments, both in conducting their own research and in their roles as project leaders and mentors to younger researchers. Typically, this group also contributes significantly to the publication output of the departments, although other staff (e.g. PhD students, postdocs, guest researchers, students and retired personnel) make major contributions as well. Scientific productivity is, however, very skewed at the level of individuals, also among professors: a small fraction of the professors typically accounts for a large share of the publications. In this study, we investigate how the productivity profile of a department (i.e. the degree of symmetrical or asymmetrical productivity among its professors) influences the citation impact of the department. The main focus is on the contributions made by the most productive professors. The findings imply that the impact of the most productive professors differs by scientific field and by the degree of productivity skewness of their departments. Nevertheless, the overall effect of the most productive professors on their departments’ citation impact is modest.


Keywords: Research productivity · Productivity distribution · Professors



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  1. Nordic Institute for Studies in Innovation, Research and Education (NIFU), Oslo, Norway
