Combining MF Networks: A Comparison Among Statistical Methods and Stacked Generalization
The two key factors in designing an ensemble of neural networks are how to train the individual networks and how to combine their outputs into a single output. In this paper we focus on the combination module. We have proposed two methods based on Stacked Generalization as the combination module of an ensemble of neural networks, and here we compare these two versions of Stacked Generalization with six statistical combination methods in order to identify the best combiner. We use the mean increase of performance and the mean percentage of error reduction for the comparison. The results show that the methods based on Stacked Generalization outperform the classical combiners.
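The combination module described above can be sketched in code. The following is a minimal, self-contained illustration of Stacked Generalization as a combiner: the level-0 expert outputs are simulated with noise here (in the paper they would come from trained multilayer feedforward networks), and the level-1 combiner is a simple linear softmax classifier trained on the concatenated expert outputs. All names and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_combiner(level1_inputs, labels, n_classes, lr=0.5, epochs=200):
    """Train a linear softmax combiner (level-1 learner) on expert outputs."""
    n, d = level1_inputs.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[labels]                 # one-hot targets
    for _ in range(epochs):
        P = softmax(level1_inputs @ W)
        W -= lr * level1_inputs.T @ (P - Y) / n   # cross-entropy gradient step
    return W

# Simulated level-0 outputs of three experts on a 2-class problem;
# the first expert is markedly more reliable than the other two.
n, k = 200, 2
labels = rng.integers(0, k, size=n)
experts = []
for noise in (0.2, 0.6, 0.6):
    logits = np.eye(k)[labels] + noise * rng.standard_normal((n, k))
    experts.append(softmax(logits))

level1 = np.hstack(experts)                       # stacked (level-1) input
W = train_combiner(level1, labels, k)
stacked_pred = np.argmax(softmax(level1 @ W), axis=1)
avg_pred = np.argmax(sum(experts), axis=1)        # classical output averaging

print("averaging accuracy:", np.mean(avg_pred == labels))
print("stacking  accuracy:", np.mean(stacked_pred == labels))
```

Unlike plain output averaging, the trained combiner can learn to weight the more reliable expert, which is the intuition behind using Stacked Generalization instead of a fixed statistical rule.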
Keywords: Combination Method, Error Reduction, Single Network, Individual Network, Correct Class
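The two comparison measures named in the abstract are commonly computed as follows: the increase of performance is the absolute accuracy gain of the ensemble over a single network, and the percentage of error reduction relates the ensemble's error rate to the single network's. The helper names and the example figures below are illustrative assumptions.

```python
def increase_of_performance(acc_single, acc_ensemble):
    """Absolute accuracy gain of the ensemble, in percentage points."""
    return acc_ensemble - acc_single

def percentage_of_error_reduction(acc_single, acc_ensemble):
    """Relative error reduction: 100 * (err_single - err_ensemble) / err_single."""
    err_single = 100.0 - acc_single
    err_ensemble = 100.0 - acc_ensemble
    return 100.0 * (err_single - err_ensemble) / err_single

# Hypothetical example: single network at 90% accuracy, ensemble at 94%.
print(increase_of_performance(90.0, 94.0))        # -> 4.0 percentage points
print(percentage_of_error_reduction(90.0, 94.0))  # -> 40.0 (% of the error removed)
```

Averaging these two quantities over all datasets in a benchmark gives the mean increase of performance and the mean percentage of error reduction used for the comparison.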