
Journal of Classification, Volume 31, Issue 2, pp 179–193

Corrected Zegers-ten Berge Coefficients Are Special Cases of Cohen’s Weighted Kappa

  • Matthijs J. Warrens

Abstract

It is shown that, if the cell weights may be calculated from the data, the chance-corrected Zegers-ten Berge coefficients for metric scales are special cases of Cohen’s weighted kappa. The corrected coefficients include Pearson’s product-moment correlation, Spearman’s rank correlation, and the intraclass correlation ICC(3,1).
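The result can be checked numerically. The sketch below is not from the paper itself; it is a minimal Python illustration, under the assumption of quadratic (squared-difference) disagreement weights and simulated rating data, of how weighted kappa reproduces two of the listed coefficients once the weights are computed from the data: standardizing each rater’s scores before applying the quadratic weights yields Pearson’s r, and rank-transforming them first yields Spearman’s rank correlation. The function name weighted_kappa and the simulated scores are illustrative assumptions.

    import numpy as np

    def weighted_kappa(x, y):
        """Cohen's weighted kappa with quadratic (squared-difference)
        disagreement weights, computed directly from its definition:
        one minus the ratio of observed to chance-expected disagreement,
        where chance expectation pairs every score of rater 1 with every
        score of rater 2 (i.e., independence of the two raters)."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        d_obs = np.mean((x - y) ** 2)                    # observed disagreement
        d_exp = np.mean((x[:, None] - y[None, :]) ** 2)  # expected disagreement
        return 1.0 - d_obs / d_exp

    rng = np.random.default_rng(seed=1)
    x = rng.normal(size=200)                        # rater 1 scores (simulated)
    y = 0.7 * x + rng.normal(scale=0.5, size=200)   # correlated rater 2 scores

    # Standardizing each rater's scores first (the weights now depend on the
    # data through the means and standard deviations) yields Pearson's r:
    zx, zy = (x - x.mean()) / x.std(), (y - y.mean()) / y.std()
    print(weighted_kappa(zx, zy))       # matches Pearson's r
    print(np.corrcoef(x, y)[0, 1])

    # Rank-transforming first (the weights now depend on the data through
    # the ranks) yields Spearman's rho; the scores are continuous, so no ties:
    rx, ry = x.argsort().argsort() + 1.0, y.argsort().argsort() + 1.0
    print(weighted_kappa(rx, ry))       # matches Spearman's rho
    print(np.corrcoef(rx, ry)[0, 1])

The identity behind the sketch: with quadratic weights the chance-expected disagreement equals Var(X) + Var(Y) + (mean(X) − mean(Y))², and the observed disagreement equals that quantity minus 2 Cov(X, Y), so after standardization kappa reduces to cov(Z_X, Z_Y), which is Pearson’s r; the same argument applied to ranks gives Spearman’s coefficient.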

Keywords

Inter-rater reliability · Inter-rater agreement · Cohen’s kappa · Cohen’s weighted kappa · Product-moment correlation · Intraclass correlation · ICC(2,1) · ICC(3,1) · Spearman’s rank correlation


References

  1. ABRAIRA, V., and PÉREZ DE VARGAS, A. (1999), “Generalization of the Kappa Coefficient for Ordinal Categorical Data, Multiple Observers and Incomplete Designs”, Qüestiió, 23, 561–571.
  2. BERRY, K.J., and MIELKE, P.W. (1988), “A Generalization of Cohen’s Kappa Agreement Measure to Interval Measurement and Multiple Raters”, Educational and Psychological Measurement, 48, 921–933.
  3. BERRY, K.J., JOHNSTON, J.E., and MIELKE, P.W. (2008), “Weighted Kappa for Multiple Raters”, Perceptual and Motor Skills, 107, 837–848.
  4. CICCHETTI, D.V. (1976), “Assessing Inter-rater Reliability for Rating Scales: Resolving Some Basic Issues”, British Journal of Psychiatry, 129, 452–456.
  5. CICCHETTI, D.V., and ALLISON, T. (1971), “A New Procedure for Assessing Reliability of Scoring EEG Sleep Recordings”, The American Journal of EEG Technology, 11, 101–110.
  6. CICCHETTI, D., BRONEN, R., SPENCER, S., HAUT, S., BERG, A., OLIVER, P., and TYRER, P. (2006), “Rating Scales, Scales of Measurement, Issues of Reliability: Resolving Some Critical Issues for Clinicians and Researchers”, The Journal of Nervous and Mental Disease, 194, 557–564.
  7. COHEN, J. (1960), “A Coefficient of Agreement for Nominal Scales”, Educational and Psychological Measurement, 20, 37–46.
  8. COHEN, J. (1968), “Weighted Kappa: Nominal Scale Agreement With Provision for Scaled Disagreement or Partial Credit”, Psychological Bulletin, 70, 213–220.
  9. CONGER, A.J. (1980), “Integration and Generalization of Kappas for Multiple Raters”, Psychological Bulletin, 88, 322–328.
  10. CREWSON, P.E. (2005), “Fundamentals of Clinical Research for Radiologists: Reader Agreement Studies”, American Journal of Roentgenology, 184, 1391–1397.
  11. DAVIES, M., and FLEISS, J.L. (1982), “Measuring Agreement for Multinomial Data”, Biometrics, 38, 1047–1051.
  12. FAGOT, R.F. (1993), “A Generalized Family of Coefficients of Relational Agreement for Numerical Scales”, Psychometrika, 58, 357–370.
  13. FLEISS, J.L., and COHEN, J. (1973), “The Equivalence of Weighted Kappa and the Intraclass Correlation Coefficient as Measures of Reliability”, Educational and Psychological Measurement, 33, 613–619.
  14. GRAHAM, P., and JACKSON, R. (1993), “The Analysis of Ordinal Agreement Data: Beyond Weighted Kappa”, Journal of Clinical Epidemiology, 46, 1055–1062.
  15. HEUVELMANS, A.P.J.M., and SANDERS, P.F. (1993), “Beoordelaarsovereenstemming” [Inter-rater Agreement], in Psychometrie in de Praktijk, eds. T.J.H.M. Eggen and P.F. Sanders, Arnhem: Cito Instituut voor Toetsontwikkeling, pp. 443–470.
  16. HUBERT, L. (1977), “Kappa Revisited”, Psychological Bulletin, 84, 289–297.
  17. JANSON, H., and OLSSON, U. (2001), “A Measure of Agreement for Interval or Nominal Multivariate Observations”, Educational and Psychological Measurement, 61, 277–289.
  18. JOBSON, J.D. (1976), “A Coefficient of Equality for Questionnaire Items with Interval Scales”, Educational and Psychological Measurement, 36, 271–274.
  19. LIGHT, R.J. (1971), “Measures of Response Agreement for Qualitative Data: Some Generalizations and Alternatives”, Psychological Bulletin, 76, 365–377.
  20. MACLURE, M., and WILLETT, W.C. (1987), “Misinterpretation and Misuse of the Kappa Statistic”, American Journal of Epidemiology, 126, 161–169.
  21. MCGRAW, K.O., and WONG, S.P. (1996), “Forming Inferences About Some Intraclass Correlation Coefficients”, Psychological Methods, 1, 30–46.
  22. MIELKE, P.W., BERRY, K.J., and JOHNSTON, J.E. (2007), “The Exact Variance of Weighted Kappa With Multiple Raters”, Psychological Reports, 101, 655–660.
  23. MIELKE, P.W., BERRY, K.J., and JOHNSTON, J.E. (2008), “Resampling Probability Values for Weighted Kappa With Multiple Raters”, Psychological Reports, 102, 606–613.
  24. POPPING, R. (1983), “Overeenstemmingsmaten voor Nominale Data” [Agreement Measures for Nominal Data], PhD thesis, Rijksuniversiteit Groningen, Groningen.
  25. POPPING, R. (2010), “Some Views on Agreement to Be Used in Content Analysis Studies”, Quality & Quantity, 44, 1067–1078.
  26. SCHUSTER, C. (2004), “A Note on the Interpretation of Weighted Kappa and Its Relations to Other Rater Agreement Statistics for Metric Scales”, Educational and Psychological Measurement, 64, 243–253.
  27. SCHUSTER, C., and SMITH, D.A. (2005), “Dispersion Weighted Kappa: An Integrative Framework for Metric and Nominal Scale Agreement Coefficients”, Psychometrika, 70, 135–146.
  28. SHROUT, P.E., and FLEISS, J.L. (1979), “Intraclass Correlations: Uses in Assessing Rater Reliability”, Psychological Bulletin, 86, 420–428.
  29. STINE, W.W. (1989), “Interobserver Relational Agreement”, Psychological Bulletin, 106, 341–347.
  30. VANBELLE, S., and ALBERT, A. (2009a), “Agreement Between Two Independent Groups of Raters”, Psychometrika, 74, 477–491.
  31. VANBELLE, S., and ALBERT, A. (2009b), “A Note on the Linearly Weighted Kappa Coefficient for Ordinal Scales”, Statistical Methodology, 6, 157–163.
  32. VON EYE, A., and MUN, E.Y. (2006), Analyzing Rater Agreement: Manifest Variable Methods, Mahwah, NJ: Lawrence Erlbaum Associates.
  33. WARRENS, M.J. (2010), “Inequalities Between Multi-rater Kappas”, Advances in Data Analysis and Classification, 4, 271–286.
  34. WARRENS, M.J. (2011), “Cohen’s Linearly Weighted Kappa Is a Weighted Average of 2 × 2 Kappas”, Psychometrika, 76, 471–486.
  35. WARRENS, M.J. (2012a), “Some Paradoxical Results for the Quadratically Weighted Kappa”, Psychometrika, 77, 315–323.
  36. WARRENS, M.J. (2012b), “A Family of Multi-rater Kappas That Can Always Be Increased and Decreased by Combining Categories”, Statistical Methodology, 9, 330–340.
  37. WARRENS, M.J. (2012c), “Equivalences of Weighted Kappas for Multiple Raters”, Statistical Methodology, 9, 407–422.
  38. WARRENS, M.J. (2013), “Conditional Inequalities Between Cohen’s Kappa and Weighted Kappas”, Statistical Methodology, 10, 14–22.
  39. WINER, B.J. (1971), Statistical Principles in Experimental Design (2nd ed.), New York: McGraw-Hill.
  40. ZEGERS, F.E. (1986a), A General Family of Association Coefficients, Groningen, The Netherlands: Boomker.
  41. ZEGERS, F.E. (1986b), “A Family of Chance-corrected Association Coefficients for Metric Scales”, Psychometrika, 51, 559–562.
  42. ZEGERS, F.E. (1991), “Coefficients for Interrater Agreement”, Applied Psychological Measurement, 15, 321–333.
  43. ZEGERS, F.E., and TEN BERGE, J.M.F. (1985), “A Family of Association Coefficients for Metric Scales”, Psychometrika, 50, 17–24.

Copyright information

© Classification Society of North America 2014

Authors and Affiliations

  1. Institute of Psychology, Unit Methodology and Statistics, Leiden University, Leiden, The Netherlands
