Volume 81, Issue 3, pp 789–809

Peer review of grant applications in biology and medicine. Reliability, fairness, and validity

  • Martin Reinhart


Abstract

This paper examines the peer review procedure of a national science funding organization, the Swiss National Science Foundation, by means of the three most frequently studied criteria: reliability, fairness, and validity. The data analyzed consist of 496 applications for project-based funding in biology and medicine from the year 1998. Overall reliability is fair, with an intraclass correlation coefficient of 0.41, and with sizeable differences between biology (0.45) and medicine (0.20). Multiple logistic regression models reveal only scientific performance indicators as significant predictors of the funding decision, while all potential sources of bias (gender, age, nationality, and academic status of the applicant, requested amount of funding, and institutional surrounding) are non-significant. Bibliometric analysis provides evidence that the decisions of a public funding organization for basic project-based research are in line with applicants' future publication success. The paper also argues for an expansion of approaches and methodologies in peer review research: focusing increasingly on process rather than outcome, and including a more diverse set of methods, e.g., content analysis. Such an expansion will be necessary to advance peer review research beyond the abundantly treated questions of reliability, fairness, and validity.
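The reliability figure above is an intraclass correlation coefficient (ICC) computed over reviewer ratings of the same applications. Purely as an illustration (the paper's rating matrix is not reproduced here, and the specific ICC variant used is not stated in the abstract), the sketch below computes one common variant, the two-way absolute-agreement ICC(2,1), from a small hypothetical ratings table; both the data and the choice of variant are assumptions.

```python
def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    ratings: one row per application, one column per reviewer.
    (Hypothetical layout; not the paper's actual data.)
    """
    n = len(ratings)      # number of rated applications
    k = len(ratings[0])   # number of reviewers per application
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way ANOVA decomposition into mean squares
    ms_rows = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_cols = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Perfect agreement between two reviewers yields ICC = 1.0;
# partial disagreement pulls the coefficient below 1.
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))          # 1.0
print(icc_2_1([[1, 2], [2, 1], [3, 3], [4, 4]]))  # ~0.84
```

By conventional benchmarks for interrater reliability, a coefficient of 0.41, as reported for the full sample, falls in the "fair" band: a single reviewer's score accounts for well under half of the between-application variance.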


Keywords: Swiss National Science Foundation · Funding Decision · Grant Application · Funding Organization · Disciplinary Difference
(These keywords were generated by machine, not by the authors.)





Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2009

Authors and Affiliations

  1. Science Studies Program, University of Basel, Basel, Switzerland
