
Balanced Learning for Ensembles with Small Neural Networks

  • Conference paper
Advances in Computation and Intelligence (ISICA 2009)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5821)


Abstract

By introducing an adaptive error function, balanced ensemble learning was developed from negative correlation learning. In this paper, balanced ensemble learning is used to train a set of small neural networks, each with only one hidden node. The experimental results suggest that balanced ensemble learning is able to create a strong ensemble by combining a set of weak learners. Unlike bagging and boosting, where learners are trained on data randomly re-sampled from the original set of patterns, learners in balanced ensemble learning can be trained on all available data. Interestingly, the learners produced by balanced ensemble learning can be only slightly better than random guessing even though they are trained on the whole data set. Another difference among these ensemble learning methods is that learners are trained simultaneously in balanced ensemble learning, whereas they are trained independently in bagging and sequentially in boosting.
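To make the training scheme concrete, the sketch below shows simultaneous training of an ensemble of one-hidden-node networks on the full data set, in the style of negative correlation learning, from which balanced ensemble learning is derived. It is a minimal illustration under stated assumptions: the adaptive error function of balanced ensemble learning itself is not reproduced here, and the names (TinyNet, train_ensemble, ensemble_predict), the penalty strength lam, and all hyper-parameters are illustrative choices rather than details taken from the paper.

    import numpy as np

    class TinyNet:
        """A network with a single tanh hidden node and a linear output."""

        def __init__(self, n_in, rng):
            self.w1 = rng.normal(scale=0.5, size=n_in)  # input -> hidden weights
            self.b1 = 0.0
            self.w2 = rng.normal(scale=0.5)             # hidden -> output weight
            self.b2 = 0.0

        def forward(self, X):
            self.h = np.tanh(X @ self.w1 + self.b1)     # one hidden activation per sample
            return self.w2 * self.h + self.b2

        def backward(self, X, dE_dF, lr):
            # One gradient-descent step on the supplied per-sample error signal dE_dF.
            dz = dE_dF * self.w2 * (1.0 - self.h ** 2)  # back-propagate through tanh
            self.w2 -= lr * np.mean(dE_dF * self.h)
            self.b2 -= lr * np.mean(dE_dF)
            self.w1 -= lr * (X.T @ dz) / len(X)
            self.b1 -= lr * np.mean(dz)

    def train_ensemble(X, y, n_nets=8, epochs=500, lr=0.1, lam=0.5, seed=0):
        """Train all member networks simultaneously on the full data set."""
        rng = np.random.default_rng(seed)
        nets = [TinyNet(X.shape[1], rng) for _ in range(n_nets)]
        for _ in range(epochs):
            outputs = np.array([net.forward(X) for net in nets])  # shape (n_nets, n_samples)
            F_bar = outputs.mean(axis=0)                          # current ensemble output
            for i, net in enumerate(nets):
                # Negative-correlation error signal for member i:
                # d/dF_i [ (F_i - y)^2 / 2 - lam * (F_i - F_bar)^2 / 2 ]
                dE_dF = (outputs[i] - y) - lam * (outputs[i] - F_bar)
                net.backward(X, dE_dF, lr)
        return nets

    def ensemble_predict(nets, X):
        # Average the member outputs and threshold to class labels in {-1, +1}.
        return np.sign(np.mean([net.forward(X) for net in nets], axis=0))

The point of the sketch is the coupling term built from the ensemble output F_bar: each member's error signal depends on the current outputs of all the other members, so the networks have to be trained simultaneously, in contrast to the independent training of bagging and the sequential training of boosting described in the abstract.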


References

  1. Liu, Y.: A balanced ensemble learning with adaptive error functions. In: Kang, L., Cai, Z., Yan, X., Liu, Y. (eds.) ISICA 2008. LNCS, vol. 5370, pp. 1–8. Springer, Heidelberg (2008)

  2. Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996)

  3. Schapire, R.E.: The strength of weak learnability. Machine Learning 5, 197–227 (1990)

  4. Hansen, L.K., Salamon, P.: Neural network ensembles. IEEE Trans. on Pattern Analysis and Machine Intelligence 12(10), 993–1001 (1990)

  5. Sarkar, D.: Randomness in generalization ability: a source to improve it. IEEE Trans. on Neural Networks 7(3), 676–685 (1996)

  6. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts. Neural Computation 3, 79–87 (1991)

  7. Jacobs, R.A., Jordan, M.I., Barto, A.G.: Task decomposition through competition in a modular connectionist architecture: the what and where vision task. Cognitive Science 15, 219–250 (1991)

  8. Liu, Y., Yao, X.: Simultaneous training of negatively correlated neural networks in an ensemble. IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics 29(6), 716–725 (1999)


Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Y. (2009). Balanced Learning for Ensembles with Small Neural Networks. In: Cai, Z., Li, Z., Kang, Z., Liu, Y. (eds) Advances in Computation and Intelligence. ISICA 2009. Lecture Notes in Computer Science, vol 5821. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04843-2_18

  • DOI: https://doi.org/10.1007/978-3-642-04843-2_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04842-5

  • Online ISBN: 978-3-642-04843-2

  • eBook Packages: Computer Science, Computer Science (R0)
