Improving Boosting Methods by Generating Specific Training and Validation Sets

  • Joaquín Torres-Sospedra
  • Carlos Hernández-Espinosa
  • Mercedes Fernández-Redondo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7063)


Previous research has shown that Bagging, Boosting and Cross-Validation Committees can each provide good performance on their own. In this paper, Boosting methods are combined with Bagging and Cross-Validation Committees in order to generate accurate ensembles and benefit from all of these alternatives. The networks are trained according to the boosting methods, but the specific training and validation sets are generated according to Bagging or Cross-Validation. The results show that the proposed methodologies, BagBoosting and Cross-Validated Boosting, outperform the original Boosting ensembles.
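The set-generation step described above can be sketched in a few lines. This is an illustrative sketch under stated assumptions, not the authors' exact procedure: the function names are hypothetical, and treating the out-of-bag points as the validation set of a bagging-style draw is an assumption.

```python
import random

def bagging_split(n, weights=None, seed=None):
    """Draw a bootstrap training set of size n (optionally sampling from a
    boosting distribution over the examples) and use the out-of-bag points
    as the validation set for that network."""
    rng = random.Random(seed)
    if weights is None:
        weights = [1.0] * n  # uniform draw, as in plain Bagging
    train = rng.choices(range(n), weights=weights, k=n)
    val = sorted(set(range(n)) - set(train))  # out-of-bag indices
    return train, val

def cv_splits(n, k, seed=None):
    """Cross-validation-style splits: the data is shuffled and cut into k
    folds; each fold serves once as the validation set while the remaining
    folds form the training set of one network."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    folds = [order[i::k] for i in range(k)]
    return [([i for j, f in enumerate(folds) if j != fold for i in f],
             folds[fold])
            for fold in range(k)]
```

In a BagBoosting round one would pass the current boosting distribution as `weights` before training the next network; in the cross-validated variant each network instead receives one of the `k` train/validation pairs.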


Keywords: Ensembles of ANN · Specific sets · Boosting alternatives





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Joaquín Torres-Sospedra (1)
  • Carlos Hernández-Espinosa (1)
  • Mercedes Fernández-Redondo (1)
  1. Department of Computer Science and Engineering, Universitat Jaume I, Castellón, Spain
