Volume 93, Issue 1, pp 3–16

Research evaluation. Part I: productivity and citedness of a German medical research institution

  • A. Pudovkin
  • H. Kretschmer
  • J. Stegmann
  • E. Garfield


An evaluation exercise was performed on 313 papers published in 2004–2008 by the research staff (66 persons) of the Deutsches Rheuma-Forschungszentrum (DRFZ). The records and the citations to them were retrieved from the Web of Science (Thomson Reuters) in March 2010. Using citation-based indexes, the authors compared the productivity and citedness of “group leaders” versus “regular scientists” and of male versus female scientists. “Group leaders” were found to be more prolific and more frequently cited than “regular scientists”; the same holds for male versus female scientists. The greatest contrast is observed between “female leaders” and “female regular scientists”. These differences are significant for indexes related to the number of papers, whereas indexes characterizing the quality of papers (the average citation rate per paper and similar measures) do not differ substantially among the groups compared. The mean percentile rank index of the 313 papers is 58.5, significantly above the global mean of about 50, which indicates an above-average citation status for the publications of the DRFZ.
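The percentile rank index mentioned above can be illustrated with a short sketch. Following the definition of Pudovkin and Garfield (2009), a paper's percentile rank within a cohort of n papers (e.g. papers of the same journal and year, ranked by citation count) can be taken as 100 · (n − r + 0.5)/n, where r is the paper's citation rank (1 = most cited, ties averaged); under this definition the cohort mean is exactly 50, matching the global baseline cited above. The function name and the tie-handling rule here are illustrative assumptions, not the authors' exact procedure:

```python
def percentile_rank(cites: int, cohort: list[int]) -> float:
    """Percentile rank of a paper with `cites` citations within `cohort`.

    `cohort` holds the citation counts of all papers in the comparison
    set (including this paper). Rank 1 is the most cited paper; tied
    papers share the average of their ranks (an assumed convention).
    """
    n = len(cohort)
    more = sum(c > cites for c in cohort)   # papers cited more often
    ties = sum(c == cites for c in cohort)  # papers with equal counts (incl. this one)
    r = more + (ties + 1) / 2               # average rank among the tied papers
    return 100 * (n - r + 0.5) / n

# Example cohort of four papers with 10, 5, 5 and 1 citations:
cohort = [10, 5, 5, 1]
print(percentile_rank(10, cohort))  # 87.5 (top paper)
print(percentile_rank(1, cohort))   # 12.5 (bottom paper)
# The mean over the whole cohort is exactly 50:
print(sum(percentile_rank(c, cohort) for c in cohort) / len(cohort))  # 50.0
```

On this scale a mean of 58.5 for the 313 DRFZ papers, versus an expected 50, reads directly as an above-average citation standing.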


Keywords

Evaluation · Productivity · Citations · Gender · Group leaders · Percentile rank index

Mathematics Subject Classification

62 68 91 94 



Part of this work by one of the authors (Kretschmer, H.) was supported by the European Commission's 7th Framework Programme (SIS-2010), project “Academic Careers Understood through Measurement and Norms” (acronym: ACUMEN).



Copyright information

© Akadémiai Kiadó, Budapest, Hungary 2012

Authors and Affiliations

  • A. Pudovkin (1)
  • H. Kretschmer (2) (corresponding author)
  • J. Stegmann (3)
  • E. Garfield (4)
  1. Institute of Marine Biology, Far East Branch, Russian Academy of Sciences, Vladivostok, Russia
  2. Faculty of Business Administration/Business Computing, University of Applied Sciences, Wildau, Germany
  3. Berlin, Germany
  4. Chairman Emeritus, Thomson Reuters Professional, formerly Institute for Scientific Information® (ISI®), Philadelphia, USA
