Abstract
By introducing an adaptive error function, balanced ensemble learning was developed from negative correlation learning. In this paper, balanced ensemble learning is used to train a set of small neural networks, each with only one hidden node. The experimental results suggest that balanced ensemble learning can create a strong ensemble by combining a set of weak learners. Unlike bagging and boosting, where learners are trained on data randomly re-sampled from the original set of patterns, learners in balanced ensemble learning can be trained on all available data. Interestingly, the learners produced by balanced ensemble learning can be only slightly better than random guessing even though they have been trained on the whole data set. Another difference among these ensemble learning methods is that learners are trained simultaneously in balanced ensemble learning, whereas they are trained independently in bagging and sequentially in boosting.
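The abstract does not reproduce the adaptive error function itself, so the sketch below instead illustrates the negative correlation learning scheme that balanced ensemble learning extends (Liu and Yao, 1999): a small ensemble of one-hidden-node networks trained simultaneously on all available data, with each learner's error coupled to the ensemble output through a correlation penalty. The penalty strength `lam`, the toy regression task, and all variable names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only; not the paper's benchmarks).
X = np.linspace(-2.0, 2.0, 64).reshape(-1, 1)
y = np.sin(1.5 * X)

M = 8      # ensemble size
lam = 0.4  # negative-correlation penalty strength (lam = 0 -> independent training)
lr = 0.05

# Each learner is a one-hidden-node network: f_i(x) = v_i * tanh(w_i*x + b_i) + c_i
w = rng.normal(0, 1, (M, 1))
b = rng.normal(0, 1, (M, 1))
v = rng.normal(0, 1, (M, 1))
c = np.zeros((M, 1))

def forward(w, b, v, c):
    h = np.tanh(w @ X.T + b)   # (M, N) hidden activations
    f = v * h + c              # (M, N) individual learner outputs
    return h, f

h, f = forward(w, b, v, c)
mse_before = np.mean((f.mean(axis=0, keepdims=True) - y.T) ** 2)

for step in range(2000):
    h, f = forward(w, b, v, c)
    F = f.mean(axis=0, keepdims=True)  # ensemble output (simple average)
    # Negative-correlation error for learner i:
    #   E_i = 1/2 (f_i - y)^2 - lam * (f_i - F)^2
    # Gradient w.r.t. f_i (treating F as constant for each learner):
    d = (f - y.T) - 2.0 * lam * (f - F)
    # Backpropagate through the one-hidden-node network; all learners
    # are updated simultaneously on the full data set.
    dv = (d * h).mean(axis=1, keepdims=True)
    dc = d.mean(axis=1, keepdims=True)
    dh = d * v * (1.0 - h ** 2)
    dw = (dh * X.T).mean(axis=1, keepdims=True)
    db = dh.mean(axis=1, keepdims=True)
    w -= lr * dw; b -= lr * db; v -= lr * dv; c -= lr * dc

_, f = forward(w, b, v, c)
mse_after = np.mean((f.mean(axis=0, keepdims=True) - y.T) ** 2)
```

Each individual one-hidden-node network remains a weak learner, but the shared penalty term pushes the learners to disagree in complementary ways, so the averaged ensemble output can fit the target far better than any single member.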
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Liu, Y. (2009). Balanced Learning for Ensembles with Small Neural Networks. In: Cai, Z., Li, Z., Kang, Z., Liu, Y. (eds) Advances in Computation and Intelligence. ISICA 2009. Lecture Notes in Computer Science, vol 5821. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04843-2_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-04842-5
Online ISBN: 978-3-642-04843-2
eBook Packages: Computer Science (R0)