Adding Diversity in Ensembles of Neural Networks by Reordering the Training Set

  • Joaquín Torres-Sospedra
  • Carlos Hernández-Espinosa
  • Mercedes Fernández-Redondo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5163)

Abstract

When an ensemble of neural networks is designed, enough diversity must be provided among the networks that compose it. In this paper we propose a new method of providing diversity: reordering the training set patterns during training. Three ordering schemes are applied to build ensembles with Simple Ensemble and Cross-Validation. The first uses the training set in its original order; in the second, the training set is reordered once before the training algorithm is applied; in the third, the training set is reordered at the beginning of each iteration of BackPropagation. With the proposed experiments we want to show empirically that reordering patterns during training is a valid source of diversity for the networks of an ensemble. The results show that the performance of the original ensemble methods can be improved by reordering the patterns during training. Moreover, this new source of diversity can be extended to more complex ensemble methods.
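The three ordering schemes can be written down in a few lines. The sketch below is an illustration, not the authors' code: it assumes a per-pattern (online) BackPropagation trainer, and the names net.backprop_step and make_net are hypothetical stand-ins for a single-pattern weight update and network construction.

    import numpy as np

    def train_network(net, X, y, epochs, mode="fixed", rng=None):
        """Train net with BackPropagation under one of three pattern orderings.

        mode = "fixed"     -- present patterns in the original order (method 1)
        mode = "pre"       -- reorder once before training starts    (method 2)
        mode = "per_epoch" -- reorder at the start of every epoch    (method 3)
        """
        rng = rng or np.random.default_rng()
        order = np.arange(len(X))
        if mode == "pre":
            rng.shuffle(order)            # one-off reordering of the training set
        for _ in range(epochs):
            if mode == "per_epoch":
                rng.shuffle(order)        # fresh order for each BackPropagation pass
            for i in order:
                net.backprop_step(X[i], y[i])   # hypothetical per-pattern update
        return net

    # A Simple Ensemble built this way: every network sees identical data but a
    # different presentation order, which is the proposed source of diversity.
    ensemble = [train_network(make_net(seed=k), X, y, epochs=500,
                              mode="per_epoch", rng=np.random.default_rng(k))
                for k in range(9)]

Because online BackPropagation updates the weights after each pattern, a different presentation order steers each network along a different trajectory through weight space, so networks trained on the same data still converge to different solutions.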

Keywords

Mean Square Error · Training Algorithm · Ensemble Method · Single Network · Multilayer Feedforward Network



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Joaquín Torres-Sospedra (1)
  • Carlos Hernández-Espinosa (1)
  • Mercedes Fernández-Redondo (1)

  1. Departamento de Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Castellón, Spain
