A Novel Selective Ensemble Learning Based on K-means and Negative Correlation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10040)

Abstract

Selective ensemble learning has attracted considerable attention as a way to improve the diversity of an ensemble. Its performance, however, is limited by conflicts and redundancy among the base classifiers. To address these problems, we propose a novel method called KNIA. The method first applies the K-means algorithm within the integration stage as an effective means of choosing representative classifiers. Negative correlation theory is then used to select a diverse subset from these representative classifiers. In contrast to classical selective ensemble learning, our algorithm follows an inverse growth process and can improve generalization ability while maintaining accuracy. Extensive experiments on multiple UCI data sets demonstrate that the proposed method outperforms four classical algorithms in both robustness and precision.
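
The abstract only sketches the KNIA pipeline (K-means to pick representative classifiers, then a negative-correlation criterion to keep a diverse subset), so the following is a minimal Python sketch of one plausible reading, not the authors' implementation. The bagged decision-tree pool, the cluster count `k`, the target ensemble size, and the covariance-of-errors diversity criterion are all assumptions not specified in the source.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Pool of base classifiers (assumed: the paper does not fix the base learner).
X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)
pool = BaggingClassifier(DecisionTreeClassifier(), n_estimators=30,
                         random_state=0).fit(X_tr, y_tr)
preds = np.array([est.predict(X_val) for est in pool.estimators_])  # (n_clf, n_val)

# Step 1 (assumed reading): cluster classifiers by their validation outputs
# and keep the member nearest each centroid as that cluster's representative.
k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(preds)
reps = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(preds[members] - km.cluster_centers_[c], axis=1)
    reps.append(members[np.argmin(dists)])

# Step 2 (assumed reading of "negative correlation"): greedily drop the
# representative whose errors are most positively correlated with the rest,
# until the target ensemble size is reached.
err = (preds[reps] != y_val).astype(float)            # error indicator vectors
err_c = err - err.mean(axis=1, keepdims=True)
cov = err_c @ err_c.T                                 # unnormalised covariance
target = 3                                            # assumed final size
selected = list(range(len(reps)))
while len(selected) > target:
    sub = cov[np.ix_(selected, selected)]
    worst = int(np.argmax(sub.sum(axis=1) - np.diag(sub)))
    selected.pop(worst)
ensemble = [pool.estimators_[reps[i]] for i in selected]

# Majority-vote prediction with the pruned ensemble.
votes = np.array([m.predict(X_val) for m in ensemble]).astype(int)
y_hat = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print("pruned-ensemble accuracy:", (y_hat == y_val).mean())
```

Clustering on validation predictions groups classifiers that behave alike, so keeping one representative per cluster already removes gross redundancy; the negative-correlation step then trims the representatives whose mistakes tend to co-occur.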

Keywords

Ensemble learning · K-means · Negative correlation · Neural network

Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

College of Computer, National University of Defense Technology, Changsha, China