
A Formal Proof of a Paradox Associated with Cohen’s Kappa


Abstract

Suppose two judges each classify a group of objects into one of several nominal categories. It has been observed in the literature that, for fixed observed agreement between the judges, Cohen’s kappa penalizes judges with similar marginals compared to judges who produce different marginals. This paper presents a formal proof of this phenomenon.
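
For concreteness, Cohen's kappa is defined as κ = (P_o − P_e) / (1 − P_e), where P_o is the observed proportion of agreement and P_e is the agreement expected by chance under independent marginals. The sketch below is an illustration constructed for this page, not an example taken from the paper: it contrasts two hypothetical 2 × 2 tables with identical observed agreement P_o = 0.50, and shows that the pair of judges with matching marginal distributions receives a markedly lower kappa than the pair with opposite marginals.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square contingency table of counts.

    kappa = (P_o - P_e) / (1 - P_e), where P_o is the observed
    proportion of agreement (the diagonal mass) and P_e is the
    agreement expected by chance under independent marginals.
    """
    p = np.asarray(table, dtype=float)
    p /= p.sum()                          # joint proportions
    p_o = np.trace(p)                     # observed agreement
    p_e = p.sum(axis=1) @ p.sum(axis=0)   # chance agreement from marginals
    return (p_o - p_e) / (1.0 - p_e)

# Both tables have identical observed agreement P_o = 0.50 (n = 100).
similar   = [[45, 25],   # both judges assign category 1 to 70% of objects
             [25,  5]]
different = [[25, 45],   # judge 1 splits 70/30, judge 2 splits 30/70
             [ 5, 25]]

print(cohens_kappa(similar))    # ~ -0.19: similar marginals are penalized
print(cohens_kappa(different))  # ~ +0.14: dissimilar marginals score higher
```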



Author information

Correspondence to Matthijs J. Warrens.

About this article

Cite this article

Warrens, M.J. A Formal Proof of a Paradox Associated with Cohen’s Kappa. J Classif 27, 322–332 (2010). https://doi.org/10.1007/s00357-010-9060-x