The Mixture of Neural Networks as Ensemble Combiner

  • Mercedes Fernández-Redondo
  • Joaquín Torres-Sospedra
  • Carlos Hernández-Espinosa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5064)


In this paper we propose two new ensemble combiners based on the Mixture of Neural Networks model. In our experiments we apply two different network architectures to the Mixture-based methods: the Basic Network (BN) and the Multilayer Feedforward network (MF). Moreover, we use ensembles of MF networks previously trained with Simple Ensemble to test the performance of the proposed combiners. Finally, we compare the proposed mixture combiners with three different mixture models and with other traditional combiners. The results show that, among the methods studied in the paper, the proposed mixture combiners are in general the best way to build multi-net systems.
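The paper itself details the proposed combiners; as a rough illustration of the underlying idea (in the spirit of Jacobs et al.'s adaptive mixtures of local experts), a gating network produces input-dependent mixing coefficients that weight the outputs of the pre-trained ensemble members. The sketch below is an assumption for illustration only: the linear gate, the function names, and the shapes are not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_combine(x, expert_probs, gate_w):
    """Combine pre-trained experts with an input-dependent gate.

    x            : (n, d) input patterns
    expert_probs : (k, n, c) class probabilities from the k ensemble members
    gate_w       : (d, k) weights of a hypothetical linear gating network
    returns      : (n, c) combined class probabilities
    """
    g = softmax(x @ gate_w)                  # (n, k) mixing coefficients
    # Convex combination of the experts' outputs, one weight set per input.
    return np.einsum('nk,knc->nc', g, expert_probs)
```

Because the gate's coefficients are non-negative and sum to one per input, the combined output remains a valid probability vector whenever the experts' outputs are. In the actual methods the gating network is trained jointly with (or on top of) the ensemble, which this static sketch does not show.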


Keywords: Neural Network, Mean Square Error, Mixture Model, Minimum Mean Square Error, Basic Network



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Mercedes Fernández-Redondo¹
  • Joaquín Torres-Sospedra¹
  • Carlos Hernández-Espinosa¹
  1. Departamento de Ingeniería y Ciencia de los Computadores, Universitat Jaume I, Castellón, Spain
