, Volume 114, Issue 3, pp 859–882

The conundrum of research productivity: a study on sociologists in Italy

  • Aliakbar Akbaritabar
  • Niccolò Casnici
  • Flaminio Squazzoni


This paper aims to understand the influence of institutional and organisational embeddedness on the research productivity of Italian sociologists. We examined all records published by Italian sociologists in Scopus from 1973 to 2016 and reconstructed their co-authorship patterns. We built an individual productivity index based on the number and type of records, the impact factor of the journals in which they were published, and each record's citations. We found that sociologists who co-authored more frequently with international authors were more productive, and that having a stable group of co-authors had a positive effect on the number of publications but not on citations. Organisational embeddedness had a positive effect on productivity at the group level (i.e., among sociologists working in the same institute), but less so at the individual level. We did not find any effect of the scientific disciplinary sectors, which are extremely influential administratively and politically for promotion and careers in Italy. With all caveats due to the limitations of our analysis, our findings suggest that internationalisation and certain context-specific organisational settings could promote scientists' productivity.


Keywords: Sociologists · Italy · Research productivity · Internationalisation · Co-authorship



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2017

Authors and Affiliations

  1. Università degli Studi di Brescia, Brescia, Italy
