
Scientometrics, Volume 76, Issue 1, pp 3–21

The evaluation of university departments and their scientists: Some general considerations with reference to exemplary bibliometric publication and citation analyses for a Department of Psychology

  • Günter Krampen

Abstract

In reference to an exemplary bibliometric publication and citation analysis for a University Department of Psychology, some general conceptual and methodological considerations on the evaluation of university departments and their scientists are presented. The data comprise publication and citation-by-others analyses (PsycINFO, PSYNDEX, SSCI, and SCI) for 36 professorial and non-professorial scientists from the tenured staff of the department under study, as well as confidential interviews on self- and colleague-perceptions with seven members of this sample. The results point to (1) skewed (Pareto) distributions of all bibliometric variables, which require nonparametric statistical analyses; (2) three outliers (the same three persons across variables) who must be excluded from some statistical analyses; (3) rather low rank-order correlations between publication and citation frequencies, with approximately 15% common variance; (4) only weak associations of the bibliometric variables with age, occupational experience, gender, academic status, and engagement in basic versus applied research; (5) the empirical appropriateness and utility of a normative typological model for evaluating scientists' research productivity and impact, based on cross-classifying the number of publications and the frequency of citations by other authors; and (6) low interrater reliability and validity of ad hoc evaluations by the department's staff. The conclusions address the utility of bibliometric data for external peer review and for feedback within scientific departments, in order to make colleague perceptions more reliable and valid.
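To make the methodological points above concrete, the following minimal sketch (Python, using synthetic data that stands in for the study's counts) illustrates why skewed count distributions call for a rank-order statistic such as Spearman's rho, and how a simple median-split cross-classification of publications and citations yields a four-fold typology in the spirit of the model mentioned in the abstract. All variable names, sample values, and the median-split cutoffs are illustrative assumptions, not values taken from the paper.

import numpy as np
from scipy import stats

# Synthetic stand-in data: publication and citation-by-others counts for
# 36 scientists, drawn from heavy-tailed (Pareto-like) distributions.
# Illustrative assumptions only, not the study's data.
rng = np.random.default_rng(seed=1)
publications = np.round(rng.pareto(a=1.5, size=36) * 5).astype(int) + 1
citations = np.round(rng.pareto(a=1.2, size=36) * 10).astype(int)

# Skewed distributions call for nonparametric statistics, e.g. a Spearman
# rank-order correlation rather than Pearson's r.
rho, p_value = stats.spearmanr(publications, citations)
print(f"Spearman rho = {rho:.2f}, shared variance = {rho**2:.0%}")

# One plausible reading of the cross-classification idea: median splits on
# both counts yield a four-fold typology of productivity and impact.
high_pub = publications >= np.median(publications)
high_cit = citations >= np.median(citations)
typology = {
    (True, True): "high productivity, high impact",
    (True, False): "high productivity, low impact",
    (False, True): "low productivity, high impact",
    (False, False): "low productivity, low impact",
}
for (pub_flag, cit_flag), label in typology.items():
    n = int(np.sum((high_pub == pub_flag) & (high_cit == cit_flag)))
    print(f"{label}: {n} scientists")

Note that roughly 15% common variance corresponds to a rank-order correlation of about 0.39, which underlines how modestly publication productivity and citation impact overlap at the level of individual scientists.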

Keywords

Citation Analysis · Publication Activity · Social Science Citation Index · Citation Frequency · Academic Status



Copyright information

© Springer Science+Business Media B.V. 2008

Authors and Affiliations

  1. Department of Psychology and Institute for Psychology Information (ZPID, Leibniz Institute), University of Trier, Trier, Germany
