
A Balanced Ensemble Learning with Adaptive Error Functions

  • Conference paper
Advances in Computation and Intelligence (ISICA 2008)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5370)


Abstract

In the practice of designing neural network ensembles, a learning error function is commonly defined once and kept fixed for every individual neural network throughout the learning process. Such a fixed error function not only tends to lead to over-fitting, but also makes learning slow on hard-to-learn data points in the data set. This paper presents a novel balanced ensemble learning approach that makes learning fast and robust. The idea of balanced ensemble learning is to define adaptive learning error functions for the individual neural networks in an ensemble: different individuals may use different forms of error function, and these error functions may change during learning. By shifting attention away from well-learned data and toward not-yet-learned data through these changing error functions, the trained ensemble achieves well-balanced learning.
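
Since the full paper sits behind the access wall, the mechanism can only be sketched from the abstract. The following minimal numpy sketch illustrates the general idea of per-individual, adaptive error functions: each network minimizes a weighted squared error whose per-example weights are recomputed from the current ensemble residuals, emphasizing not-yet-learned points, with a different weighting sharpness per individual. The residual-power weighting rule, the beta parameter, the network sizes, and the toy regression task are all assumptions made for illustration; they are not the paper's actual error-function definitions.

import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hid):
    # One-hidden-layer network: tanh hidden units, linear output.
    return {"W1": rng.normal(0.0, 0.5, (n_in, n_hid)), "b1": np.zeros(n_hid),
            "W2": rng.normal(0.0, 0.5, n_hid), "b2": 0.0}

def forward(net, X):
    H = np.tanh(X @ net["W1"] + net["b1"])
    return H, H @ net["W2"] + net["b2"]

def train_step(net, X, y, w, lr=0.05):
    # One gradient step on the weighted squared error
    # (1/N) * sum_n w_n * (f(x_n) - y_n)^2.
    H, out = forward(net, X)
    g = 2.0 * w * (out - y) / len(y)            # d(loss)/d(out) per example
    net["W2"] -= lr * (H.T @ g)
    net["b2"] -= lr * g.sum()
    gH = np.outer(g, net["W2"]) * (1.0 - H**2)  # backprop through tanh
    net["W1"] -= lr * (X.T @ gH)
    net["b1"] -= lr * gH.sum(axis=0)

# Toy regression task (an assumption made for this sketch).
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

nets = [init_net(1, 10) for _ in range(5)]

for epoch in range(500):
    ens = np.mean([forward(net, X)[1] for net in nets], axis=0)
    resid = np.abs(ens - y)
    for i, net in enumerate(nets):
        # Adaptive per-example emphasis: a large ensemble residual marks a
        # not-yet-learned point and receives more weight. Each individual
        # uses a different sharpness (beta), so individuals effectively
        # train on different error functions, and the weights change every
        # epoch as the ensemble improves. This particular rule is an
        # assumed stand-in, not the paper's formulation.
        beta = 1.0 + i
        w = resid ** beta
        w /= w.mean() + 1e-12
        train_step(net, X, y, w)

ens = np.mean([forward(net, X)[1] for net in nets], axis=0)
print("final ensemble MSE:", np.mean((ens - y) ** 2))

The design point the abstract argues for is visible here: because the weights track the ensemble's current residuals rather than a fixed loss, learning effort continually shifts toward the points the ensemble has not yet learned.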

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Y. (2008). A Balanced Ensemble Learning with Adaptive Error Functions. In: Kang, L., Cai, Z., Yan, X., Liu, Y. (eds) Advances in Computation and Intelligence. ISICA 2008. Lecture Notes in Computer Science, vol 5370. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-92137-0_1


  • DOI: https://doi.org/10.1007/978-3-540-92137-0_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-92136-3

  • Online ISBN: 978-3-540-92137-0

  • eBook Packages: Computer Science, Computer Science (R0)
