
Ensembles of Multilayer Feedforward: Some New Results

  • Joaquín Torres-Sospedra
  • Carlos Hernández-Espinosa
  • Mercedes Fernández-Redondo
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3512)

Abstract

As shown in the literature, training an ensemble of networks is an effective way to improve performance with respect to a single network. However, there are several methods of constructing the ensemble. In this paper we present some new results from a comparison of twenty different methods. We have trained ensembles of 3, 9, 20 and 40 networks in order to cover a wide range of ensemble sizes. The results show that the improvement in performance beyond 9 networks depends on the method but is usually small. Also, the best method for an ensemble of 3 networks is "Decorrelated", which adds a penalty term to the usual Backpropagation error function in order to decorrelate the outputs of the networks in the ensemble. For 9 and 20 networks the best method is Conservative Boosting, and for 40 networks the best method is Cels.
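
To make the "Decorrelated" method concrete, the following is a minimal sketch, assuming a Rosen-style decorrelation penalty in which the network currently being trained is penalised when its errors correlate with those of the previously trained ensemble member. The names decorrelated_loss and lam are illustrative, not taken from the paper.

    import numpy as np

    # Minimal sketch (assumption: a Rosen-style penalty as suggested by the
    # abstract). This trains nothing itself; it only shows the modified loss
    # that would replace the plain MSE in Backpropagation.
    def decorrelated_loss(y, out_j, out_prev, lam=0.5):
        """MSE plus a penalty that discourages network j's errors from
        correlating with those of the previously trained network."""
        mse = np.mean((y - out_j) ** 2)
        penalty = np.mean((y - out_prev) * (y - out_j))
        return mse + lam * penalty

    # Example: targets and the outputs of two successive ensemble members.
    y = np.array([1.0, 0.0, 1.0])
    out_prev = np.array([0.9, 0.2, 0.7])  # previously trained network
    out_j = np.array([0.8, 0.1, 0.9])     # network being trained
    print(decorrelated_loss(y, out_j, out_prev))

Since the ensemble output is usually the average of the member outputs, decorrelating the members' errors reduces the variance of the combined prediction, which is the error-reduction effect analysed by Tumer and Ghosh [1].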

Keywords

Good Method, Ensemble Method, Error Reduction, Single Network, Individual Network

References

  1. Tumer, K., Ghosh, J.: Error correlation and error reduction in ensemble classifiers. Connection Science 8(3&4), 385–404 (1996)
  2. Raviv, Y., Intrator, N.: Bootstrapping with Noise: An Effective Regularization Technique. Connection Science 8(3&4), 355–372 (1996)
  3. Drucker, H., Cortes, C., Jackel, D., et al.: Boosting and Other Ensemble Methods. Neural Computation 6, 1289–1301 (1994)
  4. Fernández-Redondo, M., Hernández-Espinosa, C., Torres-Sospedra, J.: Classification by multilayer feedforward ensembles. In: Yin, F.-L., Wang, J., Guo, C. (eds.) ISNN 2004. LNCS, vol. 3173, pp. 852–857. Springer, Heidelberg (2004)
  5. Verikas, A., Lipnickas, A., Malmqvist, K., Bacauskiene, M., Gelzinis, A.: Soft Combination of neural classifiers: A comparative study. Pattern Recognition Letters 20, 429–444 (1999)
  6. Oza, N.C.: Boosting with Averaged Weight Vectors. In: Windeatt, T., Roli, F. (eds.) MCS 2003. LNCS, vol. 2709, pp. 15–24. Springer, Heidelberg (2003)
  7. Kuncheva, L.I.: Error Bounds for Aggressive and Conservative AdaBoost. In: Windeatt, T., Roli, F. (eds.) MCS 2003. LNCS, vol. 2709, pp. 25–34. Springer, Heidelberg (2003)
  8. Breiman, L.: Arcing Classifiers. Annals of Statistics 26(3), 801–849 (1998)
  9. Liu, Y., Yao, X., Higuchi, T.: Evolutionary Ensembles with Negative Correlation Learning. IEEE Transactions on Evolutionary Computation 4(4), 380–387 (2000)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Joaquín Torres-Sospedra (1)
  • Carlos Hernández-Espinosa (1)
  • Mercedes Fernández-Redondo (1)

  1. Dept. de Ingeniería y Ciencia de los Computadores, Universidad Jaume I, Castellón, Spain
