Encyclopedia of Clinical Neuropsychology

2011 Edition
| Editors: Jeffrey S. Kreutzer, John DeLuca, Bruce Caplan

Inter-rater Reliability

Reference work entry
DOI: https://doi.org/10.1007/978-0-387-79948-3_1203

Synonyms

Definition

Inter-rater reliability is the extent to which two or more raters (or observers, coders, examiners) agree. It addresses the consistency with which a rating system is applied. Inter-rater reliability can be evaluated with a number of different statistics; some of the more common are percentage agreement, kappa, the product–moment correlation, and the intraclass correlation coefficient. High inter-rater reliability values indicate a high degree of agreement between examiners; low values indicate a low degree of agreement. Examples of the use of inter-rater reliability in neuropsychology include (a) the evaluation of the consistency of clinicians' neuropsychological diagnoses, (b) the evaluation of scoring parameters on drawing tasks such as the Rey Complex Figure Test or the Visual Reproduction subtest, and (c) the...
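To make the statistics above concrete, the following is a minimal sketch (not part of the original entry) of how percentage agreement and Cohen's kappa could be computed for two raters assigning nominal categories; the rater names and data are hypothetical.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which two raters assign the same category."""
    assert len(r1) == len(r2)
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement for two raters, nominal categories."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_o = percent_agreement(r1, r2)          # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if raters assigned categories independently
    # at their observed marginal rates.
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two clinicians classifying ten profiles as impaired/intact.
rater_a = ["impaired", "intact", "impaired", "intact", "intact",
           "impaired", "impaired", "intact", "impaired", "intact"]
rater_b = ["impaired", "intact", "intact", "intact", "intact",
           "impaired", "impaired", "intact", "impaired", "impaired"]

print(percent_agreement(rater_a, rater_b))   # 0.8
print(cohens_kappa(rater_a, rater_b))        # 0.6
```

In this sketch the observed agreement is 0.8, while kappa is 0.6 because it corrects for the agreement expected by chance given each rater's marginal category frequencies.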


References and Readings

  1. Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. British Columbia Mental Health and Addiction Services, PHSA Research and Networks, University of British Columbia, Vancouver, Canada