
Scientometrics, Volume 85, Issue 2, pp 429–441

Scientometric indicators: peer-review, bibliometric methods and conflict of interests

  • Primož Južnič
  • Stojan Pečlin
  • Matjaž Žaucer
  • Tilen Mandelj
  • Miro Pušnik
  • Franci Demšar
Article

Abstract

The paper discusses the role of scientometric indicators in the peer-review selection of research project proposals. An ex post facto evaluation was made of three calls for research project proposals in Slovenia: the 2003 call, with a peer-review system that did not effectively avoid conflicts of interest; the 2005 call, with a sound international peer-review system that minimized the influence of conflicts of interest but relied on a limited number of reviewers; and the 2008 call, which combined scientometric indicators with a sound international peer-review system that minimized the influence of conflicts of interest. The hypothesis was that the three peer-review systems would correlate differently with the same set of scientometric indicators. Under the last two decision-making systems (2005 and 2008), where conflicts of interest were effectively avoided, a high percentage of projects (65%) would have been selected regardless of the method used (peer review alone or bibliometrics alone). In contrast, in the 2003 call a significantly smaller percentage of projects (49%) would have been selected regardless of the method. It was shown that while scientometric indicators can hardly replace peer review as the ultimate decision-making and support system, they can reveal its weaknesses on the one hand and, on the other, verify peer-review scores and minimize conflicts of interest where necessary.
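The overlap statistic reported above (the share of projects that would have been selected irrespective of the method) can be illustrated with a minimal sketch. The proposal IDs, scores, and funding cutoff below are hypothetical, and the actual indicator set and scoring used in the study are not specified in this abstract:

```python
# Minimal sketch (hypothetical data): fraction of project proposals that would be
# funded whether ranked by peer-review scores or by a bibliometric indicator,
# analogous to the 65% / 49% overlap figures reported in the abstract.
peer_scores = {"P1": 4.8, "P2": 4.5, "P3": 3.9, "P4": 4.7, "P5": 3.2, "P6": 4.1}
biblio_scores = {"P1": 120, "P2": 60, "P3": 95, "P4": 110, "P5": 30, "P6": 40}
n_funded = 3  # number of proposals the call can fund (hypothetical)

def top_n(scores, n):
    """Return the set of proposal IDs with the n highest scores."""
    return set(sorted(scores, key=scores.get, reverse=True)[:n])

selected_by_peers = top_n(peer_scores, n_funded)
selected_by_biblio = top_n(biblio_scores, n_funded)

# Overlap: share of funded proposals that both methods would have selected.
overlap = len(selected_by_peers & selected_by_biblio) / n_funded
print(f"Selected irrespective of method: {overlap:.0%}")
```

With these toy numbers the two rankings agree on two of the three funded proposals, giving an overlap of about 67%; the same calculation applied to real call data would yield the kind of percentages the abstract compares across the 2003, 2005, and 2008 calls.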

Keywords

Scientometric indicators; Research project proposals; Ex post evaluation; Peer review systems; Conflict of interests

Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2010

Authors and Affiliations

  • Primož Južnič (1)
  • Stojan Pečlin (2)
  • Matjaž Žaucer (3)
  • Tilen Mandelj (3)
  • Miro Pušnik (3)
  • Franci Demšar (2)

  1. Department of Library and Information Science and Book Studies, Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
  2. Slovenian Research Agency (ARRS), Ljubljana, Slovenia
  3. Central Technological Library at the University of Ljubljana, Ljubljana, Slovenia