A Novel Synthetic Minority Oversampling Technique for Imbalanced Data Set Learning

  • Conference paper
Neural Information Processing (ICONIP 2011)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7063)

Abstract

Imbalanced data sets contain an unequal distribution of samples among the classes and pose a challenge to learning algorithms, because the minority-class concepts become hard to learn. Synthetic oversampling techniques address this problem by creating synthetic minority samples to balance the data set. However, most of these techniques may create erroneous synthetic minority samples that fall inside majority regions. In this respect, this paper presents a novel Cluster-Based Synthetic Oversampling (CBSO) algorithm. CBSO adopts its basic idea from existing synthetic oversampling techniques and incorporates unsupervised clustering into its synthetic data generation mechanism. CBSO ensures that synthetic samples created this way always lie inside minority regions, thus avoiding the creation of erroneous synthetic samples. Simulation analyses on several real-world data sets show the effectiveness of CBSO, with improvements in various assessment metrics such as overall accuracy, F-measure, and G-mean.
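The core idea described above — generate synthetic minority samples only between points that belong to the same minority cluster, so interpolation cannot cross into majority regions — can be sketched in a few lines. This is an illustrative sketch under assumptions, not the authors' exact CBSO procedure: the clustering here is a naive k-means run on the minority class alone, and the function name `cluster_based_oversample` and its parameters are hypothetical.

```python
import numpy as np

def cluster_based_oversample(X_min, n_synthetic, n_clusters=3, seed=0):
    """Sketch of cluster-based synthetic oversampling: cluster the
    minority samples, then interpolate only between two points that
    share a cluster, so synthetic points stay inside minority regions."""
    rng = np.random.default_rng(seed)
    # Naive k-means on the minority class only (a stand-in for the
    # unsupervised clustering step; CBSO's actual clustering may differ).
    centers = X_min[rng.choice(len(X_min), n_clusters, replace=False)].copy()
    for _ in range(10):
        dists = np.linalg.norm(X_min[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X_min[labels == k].mean(axis=0)
    # SMOTE-style interpolation, restricted to pairs within one cluster.
    synthetic = []
    while len(synthetic) < n_synthetic:
        k = int(rng.integers(n_clusters))
        members = X_min[labels == k]
        if len(members) < 2:          # skip degenerate clusters
            continue
        a, b = members[rng.choice(len(members), 2, replace=False)]
        synthetic.append(a + rng.random() * (b - a))   # point on segment a-b
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two same-cluster minority points, it lies on a segment between minority samples rather than in the space between separate minority clusters, which is where plain SMOTE can place points inside majority regions.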




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barua, S., Islam, M.M., Murase, K. (2011). A Novel Synthetic Minority Oversampling Technique for Imbalanced Data Set Learning. In: Lu, BL., Zhang, L., Kwok, J. (eds) Neural Information Processing. ICONIP 2011. Lecture Notes in Computer Science, vol 7063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24958-7_85

  • DOI: https://doi.org/10.1007/978-3-642-24958-7_85

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24957-0

  • Online ISBN: 978-3-642-24958-7
