Machine Learning, Volume 45, Issue 2, pp 171–186

A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems

  • David J. Hand
  • Robert J. Till

Abstract

The area under the ROC curve, or the equivalent Gini index, is a widely used measure of the performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is applicable only to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two-class case. We compare its properties with the standard measure of proportion correct, and with an alternative definition of proportion correct based on pairwise comparisons of classes, for a simple artificial case, and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical, results, reflecting the different aspects of performance that they measure. Like the area under the ROC curve, the measure we propose is useful in the many situations where it is impossible to give costs for the different kinds of misclassification.

Keywords: receiver operating characteristic, ROC curve, AUC, Gini index, error rate
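The pairwise-averaging construction described in the abstract lends itself to a short illustration. The Python below is a minimal sketch, not the paper's reference implementation: it assumes true labels in a NumPy array `y` and a classifier's estimated class-membership probabilities in an (n, c) matrix `prob` whose columns follow the order of `np.unique(y)`, and the names `auc_i_vs_j` and `multiclass_auc` are invented for this example. Each pairwise term is the rank-based (Mann–Whitney) estimate of the probability that a randomly drawn member of one class receives a higher estimated probability of belonging to that class than a randomly drawn member of the other class, and the two directions of each pair are averaged before averaging over all c(c − 1)/2 pairs.

```python
import numpy as np
from scipy.stats import rankdata


def auc_i_vs_j(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) AUC estimate: the probability that a
    randomly drawn 'positive' score exceeds a randomly drawn 'negative'
    score, with ties counted as 1/2 (mid-ranks)."""
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    ranks = rankdata(np.concatenate([scores_pos, scores_neg]))
    rank_sum_pos = ranks[:n_pos].sum()
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)


def multiclass_auc(y, prob, classes=None):
    """Average the symmetrised pairwise AUCs
    A(i, j) = [A(i|j) + A(j|i)] / 2 over all c(c - 1)/2 unordered pairs.

    y    : NumPy array of n true class labels
    prob : (n, c) array of estimated class-membership probabilities,
           with columns in the order given by `classes`
    """
    if classes is None:
        classes = np.unique(y)
    c = len(classes)
    total = 0.0
    for a in range(c):
        for b in range(a + 1, c):
            in_a, in_b = (y == classes[a]), (y == classes[b])
            # A(a|b): separate class a from class b using p-hat(class a | x)
            auc_ab = auc_i_vs_j(prob[in_a, a], prob[in_b, a])
            # A(b|a): separate class b from class a using p-hat(class b | x)
            auc_ba = auc_i_vs_j(prob[in_b, b], prob[in_a, b])
            total += (auc_ab + auc_ba) / 2.0
    return 2.0 * total / (c * (c - 1))
```

With c = 2 the double loop visits a single pair, and because ranking by p̂(2|x) = 1 − p̂(1|x) simply reverses the ordering given by p̂(1|x), the two directional terms agree and the value coincides with the ordinary two-class area under the ROC curve, as the abstract states.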

Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • David J. Hand¹
  • Robert J. Till¹

  1. Department of Mathematics, Imperial College, London, UK
