Applied Intelligence, Volume 47, Issue 2, pp 570–583

Creating diversity in ensembles using synthetic neighborhoods of training samples

  • Zhi Chen
  • Tao Lin
  • Rui Chen
  • Yingtao Xie
  • Hongyan Xu


Diversity among base classifiers is known to be a key driver in the construction of an effective ensemble classifier. Several methods have been proposed that construct diverse base classifiers from artificially generated training samples. However, in these methods diversity is often obtained at the expense of the accuracy of the base classifiers. Inspired by the localized generalization error model, a new sample generation method is proposed in this study. When preparing different training sets for the base classifiers, the proposed method generates samples located within limited neighborhoods of the corresponding training samples. The generated samples differ from the original training samples, yet each generated set expands a different part of the original training data. Learning these datasets yields a set of base classifiers that are accurate in different regions of the input space while maintaining appropriate diversity. Experiments on 26 benchmark datasets showed that: (1) the proposed method significantly outperformed several state-of-the-art ensemble methods in terms of classification accuracy; (2) it was significantly more efficient than other sample-generation-based ensemble methods.
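The idea described in the abstract can be sketched in code. The following is a minimal illustrative implementation, not the authors' exact algorithm: it assumes each "limited neighborhood" is an axis-aligned box of half-width `q` around a training sample (features pre-scaled to comparable ranges), and that each base classifier receives its own independently perturbed copy of the training set.

```python
import numpy as np

def synthetic_neighborhood(X, q=0.1, rng=None):
    """Generate one synthetic sample inside the q-neighborhood of each
    training sample: every feature is perturbed by uniform noise drawn
    from [-q, q]. Labels are kept unchanged, since samples stay close
    to the originals. Illustrative sketch only (the half-width q and
    the uniform noise are assumptions, not the paper's formulation)."""
    rng = np.random.default_rng(rng)
    noise = rng.uniform(-q, q, size=X.shape)
    return X + noise

def build_ensemble_datasets(X, y, n_members=5, q=0.1, seed=0):
    """Prepare one perturbed training set per base classifier.
    Each member sees a different synthetic expansion of the original
    data, which encourages diversity while every generated sample
    remains near a real one, helping preserve base accuracy."""
    rng = np.random.default_rng(seed)
    return [(synthetic_neighborhood(X, q, rng), y.copy())
            for _ in range(n_members)]
```

Each returned `(X_i, y_i)` pair would then be used to train one base classifier, with predictions combined by majority vote as in standard ensemble schemes.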


Keywords: Classifier ensemble · Diversity · Generalization ability · Localized generalization error model · Sample generation


Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • Zhi Chen (1)
  • Tao Lin (1, 2)
  • Rui Chen (1)
  • Yingtao Xie (1)
  • Hongyan Xu (1)

  1. College of Computer Science, Sichuan University, Chengdu, China
  2. Sichuan University, Chengdu, China
