The evaluation of university departments and their scientists: Some general considerations with reference to exemplary bibliometric publication and citation analyses for a Department of Psychology
With reference to an exemplary bibliometric publication and citation analysis for a university Department of Psychology, some general conceptual and methodological considerations on the evaluation of university departments and their scientists are presented. Data refer to publication and citation-by-others analyses (PsycINFO, PSYNDEX, SSCI, and SCI) for 36 professorial and non-professorial scientists from the tenured staff of the department under study, as well as to confidential interviews on self- and colleague-perceptions with seven members of the sample. The results point to (1) skewed (Pareto) distributions of all bibliometric variables, demanding nonparametric statistical analyses, (2) three outliers (the same persons across variables) that must be excluded from some statistical analyses, (3) rather low rank-order correlations between publication and citation frequencies, with approximately 15% common variance, (4) only weak associations of bibliometric variables with age, occupational experience, gender, academic status, and engagement in basic versus applied research, (5) the empirical appropriateness and utility of a normative typological model for evaluating scientists' research productivity and impact, based on cross-classifications of the number of publications and the frequency of citations by other authors, and (6) low interrater reliability and validity of ad hoc evaluations within the department's staff. The conclusions concern the utility of bibliometric data for external peer review and for feedback within scientific departments, in order to make colleague-perceptions more reliable and valid.
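The link between finding (1) and finding (3) can be illustrated with a small computational sketch: on skewed count data of this kind, a nonparametric (Spearman) rank-order correlation is the appropriate measure, and its square gives the shared variance (a rank correlation of about 0.39 corresponds to the roughly 15% common variance reported above). The data below are synthetic Pareto-like counts, not the study's actual data; the sample size of 36 matches the department described in the abstract.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical, heavily skewed (Pareto-like) publication counts for 36 scientists
pubs = rng.pareto(a=1.5, size=36).round().astype(int) + 1
# Hypothetical citation counts, loosely coupled to publication counts
cites = (pubs * rng.pareto(a=1.5, size=36)).round().astype(int)

# Spearman's rho: rank-order correlation, robust to the skew that would
# distort a parametric (Pearson) correlation on raw counts
rho, p_value = spearmanr(pubs, cites)

# Squared rank correlation as an estimate of common variance;
# rho ~ 0.39 would imply roughly 15% shared variance
shared_variance = rho ** 2
print(f"rho = {rho:.2f}, shared variance = {shared_variance:.1%}")
```

This is why the abstract's low publication-citation correlation matters for evaluation practice: with only a modest share of common variance, publication counts and citation counts capture largely distinct aspects of scientific performance and cannot substitute for one another.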
Keywords: Citation analysis; Publication activity; Social Science Citation Index; Citation frequency; Academic status