Selective Neural Network Ensemble Based on Clustering

  • Haixia Chen
  • Senmiao Yuan
  • Kai Jiang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


To improve the generalization ability of neural network ensembles, a selective method based on clustering is proposed. The method follows the overproduce-and-choose paradigm: it first generates a large number of individual networks and then clusters them according to their diversity. The network with the highest classification accuracy in each cluster is selected for the final integration. Experiments on ten UCI data sets show that the proposed algorithm outperforms two similar ensemble learning algorithms.
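The overproduce-and-choose procedure described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes pairwise disagreement as the diversity measure and average-linkage hierarchical clustering, and it takes the members' validation-set predictions as input rather than training the networks themselves.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def select_ensemble(preds, y_val, n_clusters):
    """Pick one member per diversity cluster (a sketch of the
    overproduce-and-choose idea; the paper's exact diversity
    measure is not specified here, so pairwise disagreement
    rate is assumed)."""
    m = len(preds)
    # pairwise disagreement rate as the diversity distance
    dist = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            dist[i, j] = dist[j, i] = np.mean(preds[i] != preds[j])
    # average-linkage hierarchical clustering on the condensed matrix
    z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(z, t=n_clusters, criterion="maxclust")
    # keep the most accurate member of each cluster
    acc = [np.mean(p == y_val) for p in preds]
    return [max(np.where(labels == c)[0], key=lambda k: acc[k])
            for c in np.unique(labels)]

def majority_vote(preds, chosen):
    """Combine the selected members by plurality voting."""
    votes = np.stack([preds[k] for k in chosen])
    # per-example majority over the selected members' predictions
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                               0, votes)
```

Selecting only one representative per cluster keeps the final ensemble small while preserving diversity, since highly correlated networks fall into the same cluster and contribute little beyond their best member.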


Keywords: Base Classifier · Generalization Performance · Ensemble Size · Final Integration · High Classification Accuracy




Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Haixia Chen (1)
  • Senmiao Yuan (1)
  • Kai Jiang (2)

  1. College of Computer Science and Technology, Jilin University, Changchun, China
  2. The 45th Research Institute of CETC, Beijing, China
