
Data Intrinsic Characteristics

Learning from Imbalanced Data Sets

Abstract

Although class imbalance is often pointed out as a determinant factor in the degradation of classification performance, there are situations in which good performance can be achieved even in the presence of severe class imbalance. Identifying the situations in which class imbalance is a complicating factor is an important research question. These situations are often associated with certain data intrinsic characteristics, and this chapter describes some of them. Section 10.2 discusses studies that use data complexity measures to categorize imbalanced datasets. Section 10.3 discusses the relationship between class imbalance and small disjuncts. Section 10.4 analyses the problem of data rarity, or lack of data. Section 10.5 discusses class overlapping, a complicating factor for class imbalance. Section 10.6 discusses noise in the context of class imbalance. The influence of borderline instances is discussed in Sect. 10.7. Section 10.8 analyses the problem of shift between training and deployment datasets. Section 10.9 describes problems with imperfect data. Finally, Sect. 10.10 concludes the chapter.
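The abstract's opening claim — that severe imbalance alone need not degrade performance, whereas imbalance combined with class overlap does — can be illustrated with a minimal pure-Python sketch. The synthetic 1-D data, the 99:1 ratio, and the nearest-centroid rule are all illustrative assumptions, not the chapter's method; recall is measured in-sample, purely to show the effect.

```python
import random

random.seed(0)

# Synthetic 1-D data: 990 majority points (99:1 imbalance vs. 10 minority points).
majority = [random.gauss(0.0, 1.0) for _ in range(990)]


def minority_recall(minority_mean):
    """In-sample minority recall of a nearest-centroid rule, for a
    minority class centred at `minority_mean` (hypothetical setup)."""
    minority = [random.gauss(minority_mean, 1.0) for _ in range(10)]
    c_maj = sum(majority) / len(majority)
    c_min = sum(minority) / len(minority)
    # A minority point is recovered if it lies closer to its own centroid.
    hits = sum(1 for x in minority if abs(x - c_min) < abs(x - c_maj))
    return hits / len(minority)


# Well-separated classes: severe imbalance, yet every minority point is found.
recall_separated = minority_recall(minority_mean=10.0)

# Heavily overlapping classes: the same imbalance now hurts minority recall.
recall_overlap = minority_recall(minority_mean=0.5)

print(recall_separated, recall_overlap)
```

Under this setup `recall_separated` is 1.0 despite the 99:1 ratio, while the overlapping configuration typically loses minority points, which is the interaction between imbalance and overlap that Sect. 10.5 examines in depth.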




Copyright information

© 2018 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Fernández, A., García, S., Galar, M., Prati, R.C., Krawczyk, B., Herrera, F. (2018). Data Intrinsic Characteristics. In: Learning from Imbalanced Data Sets. Springer, Cham. https://doi.org/10.1007/978-3-319-98074-4_10


  • Print ISBN: 978-3-319-98073-7

  • Online ISBN: 978-3-319-98074-4
