Researchers’ risk-smoothing publication strategies: Is productivity the enemy of impact?

Abstract

In the quest for balance between research productivity and impact, researchers in science and engineering are often encouraged to adopt a play-it-safe research and publication strategy that allows them to maintain high publication productivity and accelerate their career advancement, but may reduce the likelihood of high-impact or breakthrough research outcomes. In this paper, we analyze bibliometric data from Scopus and present results on the relationship between publication strategies, publishing productivity, and citation-based publication impact for 227 full professors of chemistry and 148 professors of mechanical engineering at ten research-intensive universities in the United States. The results provide some evidence for the “productivity as the enemy of impact” hypothesis in chemistry, where publishing at the higher margin of productivity leads to a stagnant or declining publication impact. Findings differ for mechanical engineering, where higher publishing productivity consistently leads to higher publication impact. We attribute the differences in findings between the disciplines to a higher propensity for productivity-focused publication strategies in chemistry than in mechanical engineering.

Keywords

Risk aversion · Publication strategy · Publication productivity · Research strategy · Citation impact

Mathematics Subject Classification

62P25

JEL Classification

I23 · J24 · O31

Notes

Acknowledgements

We gratefully acknowledge Craig Boardman, who assisted with the conceptualization of the study, and two anonymous reviewers for their invaluable suggestions for improving this work.


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2018

Authors and Affiliations

  1. Center for Organization Research and Design, Arizona State University, Phoenix, USA