Using Bagging and Cross-Validation to Improve Ensembles Based on Penalty Terms

  • Joaquín Torres-Sospedra
  • Carlos Hernández-Espinosa
  • Mercedes Fernández-Redondo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7063)

Abstract

Decorrelated and CELS are two ensemble methods that modify the learning procedure to increase the diversity among the networks of the ensemble. Although they perform well in previous comparisons, they are not as well known as alternatives such as Bagging and Boosting, which instead modify the learning set in order to obtain classifiers with high performance. In this paper, two procedures are applied to Decorrelated and CELS in order to modify the learning set of each individual network and improve its accuracy. The results show that both ensembles are improved by using the two proposed methodologies as specific set generators.
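To illustrate the idea of specific set generators, the following sketch shows how Bagging and a CVC-style (cross-validation committee) split could each produce one learning set per network. This is a minimal illustration under assumed conventions, not the paper's exact procedure; the function names `bagging_sets` and `cvc_sets` are hypothetical.

```python
import random

def bagging_sets(dataset, n_networks, seed=0):
    """Bagging-style sketch: each network in the ensemble receives a
    bootstrap resample (sampling with replacement) of the learning set."""
    rng = random.Random(seed)
    n = len(dataset)
    return [[dataset[rng.randrange(n)] for _ in range(n)]
            for _ in range(n_networks)]

def cvc_sets(dataset, n_networks):
    """CVC-style sketch: partition the data into n folds and give each
    network the union of all folds except one (leave-one-fold-out)."""
    folds = [dataset[i::n_networks] for i in range(n_networks)]
    return [[x for j, fold in enumerate(folds) if j != i for x in fold]
            for i in range(n_networks)]

data = list(range(10))
bags = bagging_sets(data, n_networks=3)
cv = cvc_sets(data, n_networks=5)
print(len(bags), len(bags[0]))  # 3 bootstrap sets, each of size 10
print(len(cv[0]))               # each CVC set holds 8 of the 10 patterns
```

Each network of the Decorrelated or CELS ensemble would then be trained on its own specific set instead of the full learning set.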

Keywords

Ensembles with Penalty Terms · Specific Sets · Bagging · CVC

References

  1. Asuncion, A., Newman, D.: UCI Machine Learning Repository. University of California, Irvine, School of Information and Computer Sciences (2007)
  2. Breiman, L.: Bagging predictors. Machine Learning 24(2), 123–140 (1996)
  3. Fernández-Redondo, M., Hernández-Espinosa, C., Torres-Sospedra, J.: Multilayer feedforward ensembles for classification problems. In: Pal, N.R., Kasabov, N., Mudi, R.K., Pal, S., Parui, S.K. (eds.) ICONIP 2004. LNCS, vol. 3316, pp. 744–749. Springer, Heidelberg (2004)
  4. Freund, Y., Schapire, R.E.: Experiments with a new boosting algorithm. In: International Conference on Machine Learning, pp. 148–156 (1996)
  5. Hernández-Espinosa, C., Torres-Sospedra, J., Fernández-Redondo, M.: New experiments on ensembles of multilayer feedforward for classification problems. In: Proceedings of IJCNN 2005, pp. 1120–1124 (2005)
  6. Liu, Y., Yao, X.: Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Trans. Syst. Man Cybern. 29, 716 (1999)
  7. Parmanto, B., Munro, P.W., Doyle, H.R.: Improving committee diagnosis with resampling techniques. In: Advances in Neural Information Processing Systems, pp. 882–888 (1996)
  8. Rosen, B.E.: Ensemble learning using decorrelated neural networks. Connection Science 8(3-4), 373–384 (1996)
  9. Torres-Sospedra, J., Hernández-Espinosa, C., Fernández-Redondo, M.: Adaptive boosting: Dividing the learning set to increase the diversity and performance of the ensemble. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds.) ICONIP 2006. LNCS, vol. 4232, pp. 688–697. Springer, Heidelberg (2006)
  10. Torres-Sospedra, J.: Ensembles of Artificial Neural Networks: Analysis and Development of Design Methods. Ph.D. Thesis, Universitat Jaume I (2011)
  11. Tumer, K., Ghosh, J.: Error correlation and error reduction in ensemble classifiers. Connection Science 8(3-4), 385–403 (1996)
  12. Yildiz, O.T., Alpaydin, E.: Ordering and finding the best of k>2 supervised learning algorithms. IEEE Trans. Pattern Anal. 28(3), 392–402 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Joaquín Torres-Sospedra (1)
  • Carlos Hernández-Espinosa (1)
  • Mercedes Fernández-Redondo (1)
  1. Department of Computer Science and Engineering, Universitat Jaume I, Castellón, Spain