Multilayer Feedforward Ensembles for Classification Problems

  • Mercedes Fernández-Redondo
  • Carlos Hernández-Espinosa
  • Joaquín Torres-Sospedra
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3316)


As shown in the bibliography, training an ensemble of networks is an interesting way to improve performance with respect to a single network. However, there are several methods for constructing the ensemble, and no comprehensive results show which one is the most appropriate. In this paper we present a comparison of eleven different methods. We have trained ensembles of 3, 9, 20 and 40 networks to report results across a wide range of ensemble sizes. The results show that the improvement in performance beyond 9 networks in the ensemble depends on the method but is usually marginal. The best-performing method, called "Decorrelated", adds a penalty term to the usual Backpropagation error function to decorrelate the outputs of the networks in the ensemble.
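The "Decorrelated" method highlighted in the abstract (Rosen, reference 7) augments the usual Backpropagation squared error of each network with a term that penalises errors correlated with the previously trained network in the ensemble. A minimal numerical sketch of that penalty, assuming Rosen's form lambda * (t - f_prev) * (t - f_j); the function name, toy data, and lambda value are illustrative, not from the paper:

```python
import numpy as np

def decorrelated_loss(t, f_j, f_prev, lam=0.5):
    """Mean squared error of network j plus a penalty that grows when
    its errors are positively correlated with those of the previously
    trained network f_prev (sketch of the "Decorrelated" objective)."""
    mse = np.mean((t - f_j) ** 2)
    penalty = np.mean((t - f_prev) * (t - f_j))
    return mse + lam * penalty

rng = np.random.default_rng(0)
t = rng.normal(size=200)                       # toy regression targets
err = rng.normal(scale=0.1, size=200)          # a shared error pattern
f_prev = t - err                               # outputs of the previous network
f_same = t - err                               # identical errors -> correlated
f_diff = t - rng.normal(scale=0.1, size=200)   # independent errors, same scale

# A network that repeats the previous network's errors incurs a higher
# penalised loss than one with uncorrelated errors of the same magnitude.
correlated = decorrelated_loss(t, f_same, f_prev)
uncorrelated = decorrelated_loss(t, f_diff, f_prev)
```

Note that with `lam=0` the loss reduces to the plain mean squared error, so this objective is a strict generalisation of standard Backpropagation training; the penalty only steers each new network away from error patterns the ensemble already contains.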


Keywords: Penalty Term, Ensemble Method, Error Reduction, Training Pattern, Training Point




References

  1. Tumer, K., Ghosh, J.: Error correlation and error reduction in ensemble classifiers. Connection Science 8(3&4), 385–404 (1996)
  2. Raviv, Y., Intrator, N.: Bootstrapping with Noise: An Effective Regularization Technique. Connection Science 8(3&4), 355–372 (1996)
  3. Drucker, H., Cortes, C., Jackel, D., et al.: Boosting and Other Ensemble Methods. Neural Computation 6, 1289–1301 (1994)
  4. Verikas, A., Lipnickas, A., et al.: Soft combination of neural classifiers: A comparative study. Pattern Recognition Letters 20, 429–444 (1999)
  5. Breiman, L.: Bagging Predictors. Machine Learning 24, 123–140 (1996)
  6. Freund, Y., Schapire, R.: Experiments with a New Boosting Algorithm. In: Proceedings of the Thirteenth International Conference on Machine Learning, pp. 148–156 (1996)
  7. Rosen, B.: Ensemble Learning Using Decorrelated Neural Networks. Connection Science 8(3&4), 373–383 (1996)
  8. Auda, G., Kamel, M.: EVOL: Ensembles Voting On-Line. In: Proc. of the World Congress on Computational Intelligence, pp. 1356–1360 (1998)
  9. Liu, Y., Yao, X.: A Cooperative Ensemble Learning System. In: Proc. of the World Congress on Computational Intelligence, pp. 2202–2207 (1998)
  10. Jang, M., Cho, S.: Ensemble Learning Using Observational Learning Theory. In: Proceedings of the International Joint Conference on Neural Networks, vol. 2, pp. 1281–1286 (1999)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Mercedes Fernández-Redondo (1)
  • Carlos Hernández-Espinosa (1)
  • Joaquín Torres-Sospedra (1)

  1. Dept. de Ingeniería y Ciencia de los Computadores, Universidad Jaume I, Castellón, Spain
