Journal of Classification, Volume 27, Issue 3, pp 322–332

A Formal Proof of a Paradox Associated with Cohen’s Kappa

Matthijs J. Warrens

Abstract

Suppose two judges each classify a group of objects into one of several nominal categories. It has been observed in the literature that, for fixed observed agreement between the judges, Cohen’s kappa penalizes judges with similar marginals compared to judges who produce different marginals. This paper presents a formal proof of this phenomenon.
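To make the claim concrete: Cohen’s (1960) kappa is defined as κ = (p_o − p_e)/(1 − p_e), where p_o is the proportion of objects on which the two judges agree and p_e, the agreement expected by chance, is the sum over categories of the products of the judges’ marginal proportions. The Python sketch below is illustrative only; the two tables echo the well-known two-category examples of Feinstein and Cicchetti (1990), not the tables analyzed in this paper. Both cross-classifications have observed agreement p_o = 0.60, yet the one whose marginals are similar receives the smaller kappa.

```python
# Illustrative sketch of the paradox, not code from the paper.
# kappa = (p_o - p_e) / (1 - p_e), with p_e computed from the marginals.

def cohen_kappa(table):
    """Cohen's (1960) kappa for a square contingency table of counts."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n                       # observed agreement
    rows = [sum(table[i]) / n for i in range(k)]                       # judge 1 marginals
    cols = [sum(table[i][j] for i in range(k)) / n for j in range(k)]  # judge 2 marginals
    p_e = sum(r * c for r, c in zip(rows, cols))                       # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Both tables classify n = 100 objects with observed agreement p_o = 0.60.
similar_marginals = [[45, 15], [25, 15]]  # marginals (.60, .40) and (.70, .30)
opposed_marginals = [[25, 35], [5, 35]]   # marginals (.60, .40) and (.30, .70)

print(f"{cohen_kappa(similar_marginals):.3f}")  # 0.130
print(f"{cohen_kappa(opposed_marginals):.3f}")  # 0.259
```

Holding p_o fixed, kappa decreases as p_e increases, and p_e is largest when the two judges’ marginal distributions line up; that is precisely the penalty the abstract describes.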

Keywords

Inter-rater reliability, Nominal agreement, Rearrangement inequality, Marginal homogeneity, Marginal asymmetry
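The keyword “Rearrangement inequality” points to the classical result of Hardy, Littlewood, and Pólya (1988), presumably the main tool of the proof. For reference, it states that for real sequences x_1 ≤ … ≤ x_n and y_1 ≤ … ≤ y_n and any permutation σ,

$$\sum_{i=1}^{n} x_i\, y_{n+1-i} \;\le\; \sum_{i=1}^{n} x_i\, y_{\sigma(i)} \;\le\; \sum_{i=1}^{n} x_i\, y_i.$$

Since the chance term p_e is a sum of products of the two judges’ marginal proportions, the upper bound is attained when the marginals are similarly ordered; with the observed agreement p_o fixed, maximal p_e means minimal kappa.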


References

  1. AICKIN, M. (1990), “Maximum Likelihood Estimation of Agreement in the Constant Predictive Probability Model, and Its Relation to Cohen’s Kappa,” Biometrics, 46, 293–302.
  2. BAKEMAN, R., QUERA, V., MCARTHUR, D., and ROBINSON, B.F. (1997), “Detecting Sequential Patterns and Determining Their Reliability with Fallible Observers,” Psychological Methods, 2, 357–370.
  3. BRENNAN, R.L., and PREDIGER, D.J. (1981), “Coefficient Kappa: Some Uses, Misuses, and Alternatives,” Educational and Psychological Measurement, 41, 687–699.
  4. BYRT, T., BISHOP, J., and CARLIN, J.B. (1993), “Bias, Prevalence and Kappa,” Journal of Clinical Epidemiology, 46, 423–429.
  5. CICCHETTI, D.V., and FEINSTEIN, A.R. (1990), “High Agreement but Low Kappa: II. Resolving the Paradoxes,” Journal of Clinical Epidemiology, 43, 551–558.
  6. COHEN, J. (1960), “A Coefficient of Agreement for Nominal Scales,” Educational and Psychological Measurement, 20, 37–46.
  7. CONGER, A.J. (1980), “Integration and Generalization of Kappas for Multiple Raters,” Psychological Bulletin, 88, 322–328.
  8. DE MAST, J. (2007), “Agreement and Kappa-Type Indices,” The American Statistician, 61, 149–153.
  9. DOU, W., REN, Y., WU, Q., RUAN, S., CHEN, Y., BLOYET, D., and CONSTANS, J.-M. (2007), “Fuzzy Kappa for the Agreement Measure of Fuzzy Classifications,” Neurocomputing, 70, 726–734.
  10. GUGGENMOOS-HOLZMANN, I. (1996), “The Meaning of Kappa: Probabilistic Concepts of Reliability and Validity Revisited,” Journal of Clinical Epidemiology, 49, 775–783.
  11. GWET, K.L. (2008), “Computing Inter-rater Reliability and Its Variance in the Presence of High Agreement,” British Journal of Mathematical and Statistical Psychology, 61, 29–48.
  12. FEINSTEIN, A.R., and CICCHETTI, D.V. (1990), “High Agreement but Low Kappa: I. The Problems of Two Paradoxes,” Journal of Clinical Epidemiology, 43, 543–549.
  13. GOODMAN, L.A. (1991), “Measures, Models, and Graphical Displays in the Analysis of Cross-classified Data,” Journal of the American Statistical Association, 86, 1085–1111.
  14. HARDY, G.H., LITTLEWOOD, J.E., and PÓLYA, G. (1988), Inequalities (2nd ed.), Cambridge: Cambridge University Press.
  15. HUBERT, L. (1977), “Kappa Revisited,” Psychological Bulletin, 84, 289–297.
  16. HUBERT, L.J., and ARABIE, P. (1985), “Comparing Partitions,” Journal of Classification, 2, 193–218.
  17. KRAEMER, H.C. (1979), “Ramifications of a Population Model for κ as a Coefficient of Reliability,” Psychometrika, 44, 461–472.
  18. KRAEMER, H.C., PERIYAKOIL, V.S., and NODA, A. (2002), “Tutorial in Biostatistics: Kappa Coefficients in Medical Research,” Statistics in Medicine, 21, 2109–2129.
  19. LANTZ, C.A., and NEBENZAHL, E. (1996), “Behavior and Interpretation of the κ Statistic: Resolution of the Paradoxes,” Journal of Clinical Epidemiology, 49, 431–434.
  20. LIPSITZ, S.R., LAIRD, N.M., and BRENNAN, T.A. (1994), “Simple Moment Estimates of the κ-Coefficient and Its Variance,” Applied Statistics, 43, 309–323.
  21. MARTÍN ANDRÉS, A., and FEMIA MARZO, P. (2004), “Delta: A New Measure of Agreement Between Two Raters,” British Journal of Mathematical and Statistical Psychology, 57, 1–19.
  22. MARTÍN ANDRÉS, A., and FEMIA MARZO, P. (2008), “Chance-corrected Measures of Reliability and Validity in 2 × 2 Tables,” Communications in Statistics, Theory and Methods, 37, 760–772.
  23. NELSON, J.C., and PEPE, M.S. (2000), “Statistical Description of Interrater Variability in Ordinal Ratings,” Statistical Methods in Medical Research, 9, 475–496.
  24. SIM, J., and WRIGHT, C.C. (2005), “The Kappa Statistic in Reliability Studies: Use, Interpretation, and Sample Size Requirements,” Physical Therapy, 85, 257–268.
  25. STEINLEY, D. (2004), “Properties of the Hubert-Arabie Adjusted Rand Index,” Psychological Methods, 9, 386–396.
  26. THOMPSON, W.D., and WALTER, S.D. (1988), “A Reappraisal of the Kappa Coefficient,” Journal of Clinical Epidemiology, 41, 949–958.
  27. VACH, W. (2005), “The Dependence of Cohen’s Kappa on the Prevalence Does Not Matter,” Journal of Clinical Epidemiology, 58, 655–661.
  28. VON EYE, A., and VON EYE, M. (2008), “On the Marginal Dependency of Cohen’s κ,” European Psychologist, 13, 305–315.
  29. WACKERLY, D.D., and ROBINSON, D.H. (1983), “A More Powerful Method for Testing Agreement Between a Judge and a Known Standard,” Psychometrika, 48, 183–193.
  30. WARRENS, M.J. (2008a), “On Similarity Coefficients for 2 × 2 Tables and Correction for Chance,” Psychometrika, 73, 487–502.
  31. WARRENS, M.J. (2008b), “On the Equivalence of Cohen’s Kappa and the Hubert-Arabie Adjusted Rand Index,” Journal of Classification, 25, 177–183.
  32. WARRENS, M.J. (2008c), “On Association Coefficients for 2 × 2 Tables and Properties That Do Not Depend on the Marginal Distributions,” Psychometrika, 73, 777–789.
  33. WARRENS, M.J. (2008d), “On the Indeterminacy of Resemblance Measures for (Presence/Absence) Data,” Journal of Classification, 25, 125–136.
  34. WARRENS, M.J. (2008e), “Bounds of Resemblance Measures for Binary (Presence/Absence) Variables,” Journal of Classification, 25, 195–208.
  35. WARRENS, M.J. (2009), “k-Adic Similarity Coefficients for Binary (Presence/Absence) Data,” Journal of Classification, 26, 227–245.
  36. WARRENS, M.J. (2010), “Inequalities Between Kappa and Kappa-like Statistics for k × k Tables,” Psychometrika, 75, 176–185.
  37. ZWICK, R. (1988), “Another Look at Interrater Agreement,” Psychological Bulletin, 103, 374–378.

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Institute of Psychology, Unit Methodology and Statistics, Leiden University, Leiden, The Netherlands
