Scientometrics, Volume 102, Issue 2, pp 1167–1188

What difference does it make? Impact of peer-reviewed scholarships on scientific production

  • Adriana Bin
  • Sergio Salles-Filho
  • Luiza Maria Capanema
  • Fernando Antonio Basile Colugnati

Abstract

We investigated the extent to which different selection mechanisms for awarding scholarships varied in their short- and longer-term consequences for awardees' scientific production. We conducted an impact evaluation study on undergraduate, master's, and PhD research scholarships and compared two different funding sources in Brazil: in one, the selection mechanism was based on a peer review system; in the other, it was based on an institutional system other than peer review. Over 8,500 questionnaires were successfully completed, covering the period 1995–2009. The two groups were compared in terms of their scientific performance using a propensity score approach. We found that the peer-reviewed scholarship awardees performed better: they published more often and in journals with higher impact factors than awardees from the other group. However, two further results qualify this picture. First, over the long term, awardees selected under the peer review system continued to increase their publication rates and to publish in higher-quality journals, but the differences relative to the comparison group tended to diminish after PhD graduation. Second, the better performance of peer-reviewed scholarship awardees was not observed in all subject areas. The main policy implications of this study concern a better understanding of selection mechanisms and of the heterogeneity in the relation between selection processes and scientific and academic output.
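The propensity score comparison mentioned above can be illustrated with a minimal sketch. The code below is a hypothetical, simplified example of nearest-neighbour matching on an estimated propensity score, using invented column names (peer_reviewed, publications, age, prior_papers) and simulated data; it is not the authors' actual estimation procedure, which the paper describes only at the level of a propensity score approach.

```python
# Minimal, hypothetical sketch of a propensity-score comparison between two
# scholarship groups. Column names and the simulated data are assumptions
# for illustration only, not the study's variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Simulated example data: "treatment" = peer-reviewed scholarship.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(25, 3, n),
    "prior_papers": rng.poisson(1.0, n),
    "peer_reviewed": rng.integers(0, 2, n),
})
df["publications"] = rng.poisson(1.5 + 0.5 * df["peer_reviewed"], n)

covariates = ["age", "prior_papers"]

# 1. Estimate propensity scores: P(peer-reviewed scholarship | covariates).
ps_model = LogisticRegression().fit(df[covariates], df["peer_reviewed"])
df["pscore"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Match each treated awardee to the control with the closest propensity score.
treated = df[df["peer_reviewed"] == 1]
control = df[df["peer_reviewed"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Compare scientific output (here, publication counts) across matched groups.
att = treated["publications"].mean() - matched_control["publications"].mean()
print(f"Estimated effect on publication counts (matched difference): {att:.2f}")
```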

Keywords

Scholarships · Peer review · Scientific production · Impact evaluation · Propensity score

Mathematics Subject Classification

62P25 

JEL Classification

O380 


Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2014

Authors and Affiliations

  • Adriana Bin (1)
  • Sergio Salles-Filho (2)
  • Luiza Maria Capanema (3)
  • Fernando Antonio Basile Colugnati (4)

  1. School of Applied Sciences, University of Campinas, Limeira, Brazil
  2. Department of Science and Technology Policy, Institute of Geosciences, University of Campinas, Campinas, Brazil
  3. Agronomic Institute of Campinas, Campinas, Brazil
  4. Medical School, Federal University of Juiz de Fora, Juiz de Fora, Brazil
