
Ensemble learning based on random super-reduct and resampling


Abstract

Ensemble learning has been widely used to improve the performance of base classifiers, and diversity among base classifiers is considered a key factor in its success. Recently, to promote such diversity, ensemble methods based on multi-modal perturbation have been proposed; these methods apply two or more perturbation techniques simultaneously when generating base classifiers. In this paper, from the perspective of multi-modal perturbation, we propose an ensemble approach (called 'E_RSRR') based on random super-reduct and resampling. To generate a set of accurate and diverse base classifiers, E_RSRR adopts a new multi-modal perturbation strategy that combines two perturbation techniques: resampling and random super-reduct. First, it perturbs the sample space via resampling; second, it perturbs the feature space via random super-reduct, which combines the RSS (random subspace selection) technique with the ADEFS (approximate decision entropy-based feature selection) method from rough set theory. Experimental results show that E_RSRR can provide competitive solutions for ensemble learning.
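To make the two-stage perturbation concrete, below is a minimal Python sketch of the loop the abstract describes. It is not the authors' implementation: the paper's ADEFS criterion is defined in the full text, not here, so scikit-learn's mutual_info_classif stands in as a generic entropy-based filter; the combination rule (majority voting here), the function names (build_ensemble, predict), and all parameter defaults are illustrative assumptions.

```python
# Sketch of E_RSRR-style multi-modal perturbation (assumptions noted above).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.utils import resample


def build_ensemble(X, y, n_estimators=20, subspace_ratio=0.7,
                   keep_ratio=0.5, random_state=0):
    rng = np.random.RandomState(random_state)
    members = []  # (classifier, selected feature indices) pairs
    for _ in range(n_estimators):
        # 1) Perturb the sample space: bootstrap resampling.
        Xb, yb = resample(X, y, random_state=rng.randint(2**31 - 1))
        # 2a) Perturb the feature space: random subspace selection (RSS).
        n_sub = max(1, int(subspace_ratio * X.shape[1]))
        subspace = rng.choice(X.shape[1], size=n_sub, replace=False)
        # 2b) Entropy-based filtering within the subspace
        #     (mutual information as a stand-in for ADEFS).
        scores = mutual_info_classif(Xb[:, subspace], yb, random_state=0)
        n_keep = max(1, int(keep_ratio * n_sub))
        kept = subspace[np.argsort(scores)[::-1][:n_keep]]
        clf = DecisionTreeClassifier(random_state=0).fit(Xb[:, kept], yb)
        members.append((clf, kept))
    return members


def predict(members, X):
    # Combine base classifiers by majority vote
    # (assumes non-negative integer-encoded class labels).
    votes = np.stack([clf.predict(X[:, feats]) for clf, feats in members])
    return np.apply_along_axis(
        lambda col: np.bincount(col.astype(int)).argmax(), 0, votes)
```

Usage follows the usual fit/predict pattern: members = build_ensemble(X_train, y_train), then y_pred = predict(members, X_test). Because each base classifier sees both a different bootstrap sample and a different filtered feature subset, the two perturbations compound, which is the source of diversity the abstract emphasizes.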




Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant Nos. 61973180, 61773384, U1806201, 61671261), and the Natural Science Foundation of Shandong Province, China (Grant No. ZR2018MF007).

Author information


Corresponding author

Correspondence to Junwei Du.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Jiang, F., Yu, X., Zhao, H. et al. Ensemble learning based on random super-reduct and resampling. Artif Intell Rev 54, 3115–3140 (2021). https://doi.org/10.1007/s10462-020-09922-6
