Knowledge and Information Systems, Volume 45, Issue 1, pp 247–270

Class imbalance revisited: a new experimental setup to assess the performance of treatment methods

  • Ronaldo C. Prati (corresponding author)
  • Gustavo E. A. P. A. Batista
  • Diego F. Silva
Regular Paper

Abstract

Over the last decade, class imbalance has attracted a great deal of attention from researchers and practitioners. Class imbalance is ubiquitous in Machine Learning, Data Mining and Pattern Recognition applications, and these research communities have responded with dozens of methods and techniques. Surprisingly, several fundamental questions remain open, such as “Are all learning paradigms equally affected by class imbalance?”, “What is the expected performance loss for different degrees of imbalance?” and “How much of the performance loss can be recovered by treatment methods?”. In this paper, we propose a simple experimental design to assess the performance of class imbalance treatment methods. This setup uses real data sets with artificially modified class distributions to evaluate classifiers over a wide range of class imbalance. We apply this experimental design in a large-scale evaluation with 22 data sets and seven learning algorithms from different paradigms. We also propose a statistical procedure, based on confidence intervals, to evaluate relative degradation and recovery. This procedure allows a simple yet insightful visualization of the results and provides the basis for drawing statistical conclusions. Our results indicate that the expected performance loss, expressed as a percentage of the performance obtained with the balanced distribution, is quite modest (below 5 %) for distributions with at least 10 % of minority class examples. However, the loss increases quickly for higher degrees of imbalance, reaching 20 % when the minority class accounts for only 1 % of the examples. Support Vector Machines are the classifier paradigm least affected by class imbalance, being almost insensitive to all but the most imbalanced distributions. Finally, we show that the treatment methods only partially recover the performance losses: on average, about 30 % or less of the performance lost due to class imbalance was recovered by these methods.
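To make the setup concrete, the sketch below illustrates one way to realize the two ingredients described in the abstract: subsampling a real data set to a target minority class proportion, and measuring the relative performance loss and recovery against the balanced distribution. This is a minimal Python illustration assuming a binary classification task and scores in [0, 1]; the function names, the subsampling strategy and the exact formulas are assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np

def subsample_to_proportion(X, y, minority_prop, seed=None):
    """Subsample a binary data set so the minority class makes up roughly
    `minority_prop` of the examples (keeps all majority examples).
    Illustrative only; not the authors' exact procedure."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    min_idx = np.flatnonzero(y == minority)
    maj_idx = np.flatnonzero(y != minority)
    # n_min / (n_min + n_maj) = minority_prop  =>  n_min = p * n_maj / (1 - p)
    n_min = int(round(minority_prop * len(maj_idx) / (1.0 - minority_prop)))
    n_min = max(1, min(n_min, len(min_idx)))
    keep = np.concatenate([maj_idx, rng.choice(min_idx, size=n_min, replace=False)])
    return X[keep], y[keep]

def relative_loss(perf_balanced, perf_imbalanced):
    """Performance loss as a percentage of the balanced-distribution performance."""
    return 100.0 * (perf_balanced - perf_imbalanced) / perf_balanced

def relative_recovery(perf_balanced, perf_imbalanced, perf_treated):
    """Share (in %) of the loss that a treatment method (e.g. SMOTE) recovers."""
    lost = perf_balanced - perf_imbalanced
    return 100.0 * (perf_treated - perf_imbalanced) / lost if lost > 0 else 0.0

# Example: a score of 0.90 at the balanced distribution that drops to 0.81
# under imbalance and climbs back to 0.84 after treatment corresponds to a
# 10 % relative loss, of which about one third is recovered.
print(relative_loss(0.90, 0.81))            # ~10.0
print(relative_recovery(0.90, 0.81, 0.84))  # ~33.3
```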

Keywords

Class imbalance · Experimental setup · Sampling methods

Acknowledgments

We thank the anonymous reviewers for their comments on the draft of this paper. We also thank Nitesh Chawla for providing the Microcalcifications in Mammography data set. This work was funded by FAPESP award 2012/07295-3.

Copyright information

© Springer-Verlag London 2014

Authors and Affiliations

  • Ronaldo C. Prati (1), corresponding author
  • Gustavo E. A. P. A. Batista (2)
  • Diego F. Silva (2)
  1. Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, Santo André, Brazil
  2. Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos, Brazil
