The COMET Rating Procedure in Practice: Some Conclusions

  • Felix Rauner
  • Lars Heinemann
  • Andrea Maurer
  • Bernd Haasler
  • Birgitt Erdwien
  • Thomas Martens
Chapter
Part of the Technical and Vocational Education and Training: Issues, Concerns and Prospects book series (TVET, volume 16)

Abstract

The quality of a measurement tool for evaluating professional competence and competence development depends largely on the extent to which the evaluators' (raters') assessments of the participants' individual solutions converge or diverge (inter-rater reliability).
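
As an illustration of what convergence or divergence of ratings means in practice, the following is a minimal sketch (not taken from the chapter) of computing ICC(2,1), a two-way random-effects intraclass correlation for absolute agreement that is commonly reported as an inter-rater reliability index; the scores matrix and the numbers of raters and solutions are purely illustrative.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_solutions, n_raters) matrix of scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    # Two-way ANOVA decomposition of the total sum of squares
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between solutions
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Illustrative (invented) data: 5 test-task solutions scored by 3 trained raters.
scores = np.array([
    [42.0, 40.0, 44.0],
    [30.0, 28.0, 31.0],
    [55.0, 57.0, 54.0],
    [18.0, 20.0, 19.0],
    [47.0, 45.0, 48.0],
])
print(f"ICC(2,1) = {icc_2_1(scores):.3f}")  # values close to 1 indicate converging ratings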

Keywords

Final Coefficient · Test Assignments · Rater Training · Business Process Orientation · Competence Model

Copyright information

© Springer Science+Business Media Dordrecht 2013

Authors and Affiliations

  • Felix Rauner (1)
  • Lars Heinemann (2)
  • Andrea Maurer (2)
  • Bernd Haasler (3)
  • Birgitt Erdwien (4)
  • Thomas Martens (5)

  1. TVET Research Group (I:BB), Universität Bremen, Bremen, Germany
  2. TVET Research Group (IBB), University of Bremen, Bremen, Germany
  3. Pädagogische Hochschule Weingarten, Weingarten, Germany
  4. Oyten-Sagehorn, Germany
  5. Langen, Germany
