Computing Cohen’s kappa coefficients using SPSS MATRIX

  • Claude A. M. Valiquette
  • Alain D. Lesage
  • Mireille Cyr
  • Jean Toupin
Program Abstracts/Algorithms

Abstract

This short paper proposes a general strategy for computing kappa coefficients with the SPSS MATRIX routine. The method rests on the following rationale. If the contingency table is treated as a square matrix, the observed proportions of agreement lie in the cells of the main diagonal, so their sum equals the trace of the matrix, whereas the proportions of agreement expected by chance are the products of the corresponding row and column marginals. The generalization to weighted kappa, which requires an additional square matrix of disagreement weights of the same order as the contingency table, becomes possible through the Hadamard product, that is, the elementwise product of two matrices, as sketched in the example below.
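To make the rationale concrete, the following MATRIX sketch computes both coefficients for a hypothetical 3×3 table of two raters' judgments. It is an illustration of the strategy described above, not the authors' published listing; the frequencies and the disagreement weights (squared category distances) are invented for the example.

  MATRIX.
  * Hypothetical frequency table: rows = rater A, columns = rater B.
  COMPUTE F = {25, 3, 2; 4, 30, 6; 1, 5, 24}.
  COMPUTE N = MSUM(F).
  COMPUTE P = F / N.
  * Observed agreement is the trace of P; chance agreement is the
  * sum of products of the row and column marginals.
  COMPUTE PO = TRACE(P).
  COMPUTE R = RSUM(P).
  COMPUTE C = CSUM(P).
  COMPUTE PE = C * R.
  COMPUTE KAPPA = (PO - PE) / (1 - PE).
  PRINT KAPPA /TITLE = "Cohen's kappa".
  * Weighted kappa: W holds disagreement weights (zero on the diagonal).
  COMPUTE W = {0, 1, 4; 1, 0, 1; 4, 1, 0}.
  COMPUTE E = R * C.
  COMPUTE QO = MSUM(W &* P).
  COMPUTE QE = MSUM(W &* E).
  COMPUTE KW = 1 - QO / QE.
  PRINT KW /TITLE = "Weighted kappa".
  END MATRIX.

Writing the chance-expected proportions as the outer product R * C makes the weighted case a direct application of the Hadamard product (the &* operator in MATRIX notation): the weighted coefficient is one minus the ratio of the summed weighted observed proportions to the summed weighted expected proportions.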

References

  1. Antonak, R. F. (1977). A computer program to compute measures of response agreement for nominal scale data obtained from two judges. Behavior Research Methods & Instrumentation, 9, 553.
  2. Berk, R. A., & Campbell, K. L. (1976). A FORTRAN program for Cohen's kappa coefficient of observer agreement. Behavior Research Methods & Instrumentation, 8, 396.
  3. Bloor, R. N. (1983). A computer program to determine interrater reliability for dichotomous-ordinal rating scales. Behavior Research Methods & Instrumentation, 15, 615.
  4. Burns, E., & Cavallaro, C. (1982). A computer program to determine interobserver reliability statistics. Behavior Research Methods & Instrumentation, 14, 42.
  5. Chan, T. S. C. (1987). A DBase III program that performs significance testing for the kappa coefficient. Behavior Research Methods, Instruments, & Computers, 19, 53–54.
  6. Cicchetti, D. V., Showalter, D., & McCarthy, P. (1990). A computer program for calculating subject-by-subject kappa or weighted kappa coefficients. Educational & Psychological Measurement, 50, 153–158.
  7. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational & Psychological Measurement, 20, 37–46.
  8. Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
  9. Collis, G. M. (1985). Kappa, measures of marginal symmetry and intraclass correlations. Educational & Psychological Measurement, 45, 55–62.
  10. Conger, A. J., & Ward, D. G. (1984). Agreement among 2×2 agreement indices. Educational & Psychological Measurement, 44, 301–313.
  11. Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378–382.
  12. Fleiss, J. L., & Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational & Psychological Measurement, 33, 613–619.
  13. Fleiss, J. L., Nee, J. C., & Landis, J. R. (1979). Large sample variance of kappa in the case of different sets of raters. Psychological Bulletin, 86, 974–977.
  14. Hubert, L. J. (1987). Assignment methods in combinatorial data analysis. New York: Marcel Dekker.
  15. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159–174.
  16. Norusis, M. J. (1990a). SPSS base system user's guide. Chicago: SPSS Inc.
  17. Norusis, M. J. (1990b). SPSS advanced statistics user's guide. Chicago: SPSS Inc.
  18. Rae, G. (1984). On measuring agreement among several judges on the presence or absence of a trait. Educational & Psychological Measurement, 44, 247–253.
  19. Siegel, S., & Castellan, N. J., Jr. (1988). Nonparametric statistics for the behavioral sciences (2nd ed.). New York: McGraw-Hill.
  20. Watkins, M. W., & Larimer, L. D. (1980). Interrater agreement statistics with the microcomputer. Behavior Research Methods & Instrumentation, 12, 466.
  21. Wixon, D. R. (1979). Cohen's kappa coefficient of observer agreement: A BASIC program for minicomputers. Behavior Research Methods & Instrumentation, 11, 602.

Copyright information

© Psychonomic Society, Inc. 1994

Authors and Affiliations

  • Claude A. M. Valiquette (1)
  • Alain D. Lesage (1)
  • Mireille Cyr (1)
  • Jean Toupin (2)

  1. Department of Psychology, Université de Montréal, Montréal, Canada
  2. University of Sherbrooke, Sherbrooke
