
Enhancing techniques for learning decision trees from imbalanced data

  • Regular Article

Advances in Data Analysis and Classification

Abstract

Several machine learning techniques assume that the number of objects in the considered classes is approximately equal. In real-world applications, however, the class of interest is generally scarce. Imbalanced data may still allow high overall accuracy with most standard learning algorithms, but it poses a real challenge when accuracy on the minority class is considered. To deal with this issue, we introduce in this paper a novel adaptation of the decision tree algorithm to imbalanced data. A new asymmetric entropy measure is proposed: it shifts the class distribution of maximal uncertainty to the a priori class distribution and embeds this in the node-splitting process. Unlike most competing split criteria, whose formulas fix only the vector of maximal uncertainty, the proposed entropy is customizable, with an adjustable concavity that better matches the system's expectations. Experimental results across thirty-five differently class-imbalanced data-sets show significant improvements over various split criteria adapted to imbalanced situations. Furthermore, when combined with sampling strategies and ensemble-based methods, our entropy yields significant improvements in minority-class prediction, along with good handling of the data difficulties related to the class imbalance problem.
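The paper's own AECID entropy is not reproduced in this abstract, but the idea of an entropy whose maximum is shifted to the a priori class distribution can be illustrated with the binary asymmetric entropy of Marcellin et al. (2006), \(h_\theta(p) = p(1-p)/\big((1-2\theta)p + \theta^2\big)\), which equals 1 at \(p=\theta\) and 0 at \(p\in\{0,1\}\). Below is a minimal Python sketch of a node-splitting gain built on that form; the function names and the choice of \(\theta\) as the minority-class prior are illustrative assumptions, not the authors' implementation (the paper's entropy additionally offers an adjustable concavity).

```python
import numpy as np

def asymmetric_entropy(p, theta):
    """Binary asymmetric entropy peaking at p = theta instead of p = 0.5.

    Illustrative form from Marcellin et al. (2006); the paper's AECID
    entropy is a customizable variant with adjustable concavity.
    """
    return p * (1.0 - p) / ((1.0 - 2.0 * theta) * p + theta ** 2)

def split_gain(y_parent, y_left, y_right, theta):
    """Entropy reduction of a candidate binary split, children weighted by size."""
    def h(y):
        # impurity of a node from its empirical minority-class proportion
        return asymmetric_entropy(np.mean(y), theta)
    w = len(y_left) / len(y_parent)
    return h(y_parent) - (w * h(y_left) + (1.0 - w) * h(y_right))

# With theta = 0.1 (a 10% minority prior), a node whose minority share equals
# the prior is maximally impure, so splits are rewarded for moving children
# away from the a priori distribution rather than away from a 50/50 mix.
y = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
print(asymmetric_entropy(y.mean(), theta=0.1))  # -> 1.0
```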


Notes

  1. The data-set IDs are in \(\{1,3,6{-}8,11{-}15,17,18,21{-}27,29,31{-}35\}.\)

  2. The data-set IDs are in \(\{2,5,9,10,20\}.\)

  3. The data-set IDs are in \(\{4,16,19,28\}.\)

  4. The data-set IDs, in descending order of the percentage of borderline minority-class examples, are in \(\{31, 8, 33, 5, 32, 6, 35, 16, 28, 22, 1, 13, 25\}\) (the example typology is sketched after these notes).

  5. The data-set IDs, in descending order of the percentage of rare minority-class examples, are in \(\{29, 25, 22, 35, 33, 16, 5, 28, 6, 26\}.\)

  6. The data-set IDs, in descending order of outlier presence, are in \(\{23, 34, 29, 22\}.\)

  7. The data-set IDs are in \(\{27, 20, 19, 7, 3\}.\)

  8. Referenced in Sect. 5.1.
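The borderline, rare, and outlier labels used in Notes 4–6 follow the neighborhood-based typology of Napierala and Stefanowski (2016), which classifies each minority example by the class mix of its five nearest neighbors. A minimal sketch, assuming Euclidean distance and scikit-learn's NearestNeighbors (both our assumptions, not details taken from this paper):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def minority_example_types(X, y, minority_label=1, k=5):
    """Tag each minority example as safe/borderline/rare/outlier from the
    class composition of its k nearest neighbors (Napierala and
    Stefanowski 2016): 4-5 minority neighbors -> safe, 2-3 -> borderline,
    1 -> rare, 0 -> outlier."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X[y == minority_label])
    labels = []
    for row in idx:
        # row[0] is the query point itself; count minority-class neighbors
        same = int(np.sum(y[row[1:]] == minority_label))
        if same >= 4:
            labels.append("safe")
        elif same >= 2:
            labels.append("borderline")
        elif same == 1:
            labels.append("rare")
        else:
            labels.append("outlier")
    return labels
```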


Author information

Corresponding author

Correspondence to Ikram Chaabane.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Details of comparative results

See Tables 11, 12, 13, 14, 15, 16, 17, 18 and 19.

Table 18 Sensitivity test results for data-sampling-based AECID-DT within ensemble techniques
Table 19 Specificity test results for data-sampling-based AECID-DT within ensemble techniques

About this article

Cite this article

Chaabane, I., Guermazi, R. & Hammami, M. Enhancing techniques for learning decision trees from imbalanced data. Adv Data Anal Classif 14, 677–745 (2020). https://doi.org/10.1007/s11634-019-00354-x
