Evaluating Misclassifications in Imbalanced Data

  • William Elazmeh
  • Nathalie Japkowicz
  • Stan Matwin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4212)

Abstract

Evaluating classifier performance with ROC curves is popular in the machine learning community. To date, the only method for assessing the confidence of ROC curves is to construct ROC bands. In the case of severe class imbalance with few instances of the minority class, ROC bands become unreliable. We propose a generic framework for classifier evaluation that identifies a segment of an ROC curve in which misclassifications are balanced. Confidence is measured by Tango's 95% confidence interval for the difference in misclassification rates between the two classes. We test our method on two-class problems with severe class imbalance. Our evaluation favors classifiers with low numbers of misclassifications in both classes. Our results show that the proposed evaluation method produces more confident assessments than ROC bands.
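The idea sketched in the abstract can be illustrated in a few lines of Python: sweep the classifier's score thresholds to trace the ROC curve, and keep the segment of thresholds for which a 95% confidence interval on the difference between the false-negative rate and the false-positive rate contains zero (i.e., misclassifications are balanced within statistical confidence). This is an illustrative sketch, not the authors' implementation: the Wald-style interval below is a simplified stand-in for Tango's score-based interval for paired proportions used in the paper, and the function names `roc_points` and `balanced_segment` are invented here.

```python
import math

def roc_points(scores, labels):
    """Trace (FPR, TPR) points by sweeping each distinct score
    as a threshold. labels: 1 = positive (minority), 0 = negative."""
    pos = sum(labels)
    neg = len(labels) - pos
    pts = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        pts.append((fp / neg, tp / pos))
    return pts

def balanced_segment(scores, labels, z=1.96):
    """Return the thresholds whose 95% CI for (FNR - FPR) contains 0,
    i.e., the ROC segment where misclassifications are balanced.
    NOTE: uses a simple Wald interval as a stand-in for Tango's
    score-based confidence interval described in the paper."""
    pos = sum(labels)
    neg = len(labels) - pos
    kept = []
    for t in sorted(set(scores), reverse=True):
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        fnr, fpr = fn / pos, fp / neg
        se = math.sqrt(fnr * (1 - fnr) / pos + fpr * (1 - fpr) / neg)
        lo, hi = (fnr - fpr) - z * se, (fnr - fpr) + z * se
        if lo <= 0.0 <= hi:
            kept.append(t)
    return kept
```

With few minority-class instances, the interval on FNR is wide, which is exactly why the paper replaces the naive interval with Tango's more reliable one for small, imbalanced samples.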

References

  1. Ling, C.X., Huang, J., Zhang, H.: AUC: a better measure than accuracy in comparing learning algorithms. In: Canadian Conference on AI, pp. 329–341 (2003)
  2. Provost, F., Fawcett, T.: Analysis and visualization of classifier performance: comparison under imprecise class and cost distributions. In: The Third International Conference on Knowledge Discovery and Data Mining, pp. 34–48 (1997)
  3. Cohen, W.W., Schapire, R.E., Singer, Y.: Learning to order things. Journal of Artificial Intelligence Research 10, 243–270 (1999)
  4. Swets, J.: Measuring the accuracy of diagnostic systems. Science 240, 1285–1293 (1988)
  5. Drummond, C., Holte, R.C.: Explicitly representing expected cost: an alternative to ROC representation. In: The Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 198–207 (2000)
  6. Drummond, C., Holte, R.C.: What ROC curves can't do (and cost curves can). In: ECAI 2004 Workshop on ROC Analysis in AI (2004)
  7. Macskassy, S.A., Provost, F., Rosset, S.: ROC confidence bands: an empirical evaluation. In: Proceedings of the 22nd International Conference on Machine Learning (ICML 2005), pp. 537–544 (2005)
  8. Macskassy, S.A., Provost, F.: Confidence bands for ROC curves: methods and empirical study. In: Proceedings of the 1st Workshop on ROC Analysis in AI (ROCAI-2004) at ECAI-2004 (2004)
  9. Drummond, C., Holte, R.C.: Severe class imbalance: why better algorithms aren't the answer. In: Proceedings of the 16th European Conference on Machine Learning, pp. 539–546 (2005)
  10. Motulsky, H.: Intuitive Biostatistics. Oxford University Press, Oxford (1995)
  11. Tango, T.: Equivalence test and confidence interval for the difference in proportions for the paired-sample design. Statistics in Medicine 17, 891–908 (1998)
  12. Newcombe, R.G.: Improved confidence intervals for the difference between binomial proportions based on paired data. Statistics in Medicine 17, 2635–2650 (1998)
  13. Newman, D.J., Hettich, S., Blake, C.L., Merz, C.J.: UCI Repository of Machine Learning Databases. University of California, Irvine, Dept. of Information and Computer Sciences (1998), http://www.ics.uci.edu/~mlearn/MLRepository.html
  14. Witten, I.H., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques, 2nd edn. Morgan Kaufmann, San Francisco (2005)
  15. Dietterich, T.G.: Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation 10(7), 1895–1923 (1998)
  16. Newcombe, R.G.: Two-sided confidence intervals for the single proportion: comparison of seven methods. Statistics in Medicine 17, 857–872 (1998)
  17. Everitt, B.S.: The Analysis of Contingency Tables. Chapman & Hall, Boca Raton (1992)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • William Elazmeh (1)
  • Nathalie Japkowicz (1)
  • Stan Matwin (1, 2)
  1. School of Information Technology and Engineering, University of Ottawa, Canada
  2. Institute of Computer Science, Polish Academy of Sciences, Poland