Inter-rater Reliability
Definition
Inter-rater reliability is the extent to which two or more raters (or observers, coders, examiners) agree. It addresses the consistency with which a rating system is applied. Inter-rater reliability can be evaluated using a number of different statistics; some of the more common are percentage agreement, kappa, product–moment correlation, and the intraclass correlation coefficient. High inter-rater reliability values indicate a high degree of agreement between examiners; low values indicate a low degree of agreement. Examples of the use of inter-rater reliability in neuropsychology include (a) the evaluation of the consistency of clinicians' neuropsychological diagnoses, (b) the evaluation of scoring parameters on drawing tasks such as the Rey Complex Figure Test or Visual Reproduction subtest, and (c) the...
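Two of the statistics named above, percentage agreement and Cohen's kappa, can be computed directly from paired ratings. The sketch below assumes two raters assigning categorical codes to the same set of items; the rating labels and data are illustrative, not drawn from any real study.

```python
# Minimal sketch of two inter-rater agreement statistics for two raters
# coding the same items: percentage agreement and Cohen's kappa.
# The rating data below are hypothetical.
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters assign the same code."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement: (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if the raters coded independently
    # at their observed base rates for each category.
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

rater_a = ["impaired", "normal", "normal", "impaired", "normal", "normal"]
rater_b = ["impaired", "normal", "impaired", "impaired", "normal", "normal"]

print(round(percent_agreement(rater_a, rater_b), 3))  # 0.833
print(round(cohens_kappa(rater_a, rater_b), 3))       # 0.667
```

Note that kappa (0.667) is lower than raw percentage agreement (0.833) because it discounts the agreement the two raters would be expected to reach by chance alone, which is why kappa is generally preferred over simple percentage agreement for categorical ratings.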
References and Readings
- Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.