
Scientometrics, Volume 109, Issue 2, pp 697–722

Importance and susceptibility of scientific productivity indicators: two sides of the same coin

  • Alexandre Rodrigues de Oliveira
  • Carlos Fernando Mello

Abstract

We investigated whether applicants for, or recipients of, research productivity fellowships from the main research funding agency in Brazil (CNPq) would consider the most “important” products and indicators of scientific/academic activity to also be the least susceptible ones. We hypothesized that the perceived susceptibility and importance of productivity indicators would vary with the fellowship level of the grantees. Seven hundred and two scientists in the area of biosciences (79 non-grantees and 623 recipients of research productivity fellowships) participated in the study. The scientists were asked to score the importance of a series of indicators (e.g., total number of published articles, number of articles as first author, number of articles as last/corresponding author, H-index, books, and others, totaling 39 variables) on a Likert scale. After evaluating the symbolic importance of all indicators, the scientists scored the “susceptibility” of the same indicators. The products and indicators of productivity considered the most important were also those considered the least susceptible. Prizes, publications, and grants were perceived as increasingly important and decreasingly susceptible as their scope widened from local to national to international. Moreover, the symbolic magnitude of the susceptibility and importance of curriculum elements (indicators) varied with the grantee's productivity fellowship level and gender. Despite these differences, a consensus on the most important and least susceptible products and indicators could be established. Ultimate individual responsibility and international projection are common characteristics of the most important and least susceptible indicators of scientific productivity.
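The core analysis implied by the abstract can be illustrated with a small sketch: for each indicator, average the Likert ratings for importance and for susceptibility, then test whether the two rankings run in opposite directions. The Python code below is a minimal, hypothetical illustration; the indicator names, the 1–5 rating scale, and the simulated responses are assumptions for demonstration, not the study's actual data or analysis.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical subset of the 39 indicators scored in the study.
    indicators = [
        "total published articles",
        "articles as first author",
        "articles as last/corresponding author",
        "H-index",
        "books",
        "international prizes",
    ]

    # Simulated Likert ratings (1-5) from 100 respondents per indicator;
    # in the actual study these would be the survey responses.
    n = 100
    importance = {k: rng.integers(1, 6, n) for k in indicators}
    susceptibility = {k: rng.integers(1, 6, n) for k in indicators}

    mean_imp = [importance[k].mean() for k in indicators]
    mean_sus = [susceptibility[k].mean() for k in indicators]

    # The abstract's central finding corresponds to a negative rank
    # correlation: the most important indicators rank as least susceptible.
    rho, p = spearmanr(mean_imp, mean_sus)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

With real survey data, a strongly negative rho across the 39 indicators would support the "two sides of the same coin" conclusion; the same comparison could be repeated within each fellowship level and gender to probe the group differences the abstract reports.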

Keywords

Productivity indicators, Importance, Susceptibility, Symbolic magnitude, Grant review

Acknowledgments

Special thanks are due to CNPq for making this study possible and for encouraging it. This work was supported by CNPq and CAPES. C.F. Mello is the recipient of a CNPq Research Productivity Fellowship.

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2016

Authors and Affiliations

  1. Science Education Graduate Program, Federal University of Santa Maria (UFSM), Santa Maria, Brazil
  2. Department of Physiology and Pharmacology, Federal University of Santa Maria (UFSM), Santa Maria, Brazil
