A Balanced Ensemble Learning with Adaptive Error Functions

  • Yong Liu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5370)

Abstract

In the practice of designing neural network ensembles, it is common to define a learning error function once and keep it fixed for every individual neural network throughout the learning process. Such a fixed error function not only tends to cause over-fitting, but also slows learning on the hard-to-learn data points in the data set. This paper presents a novel balanced ensemble learning approach that makes learning both fast and robust. The idea of balanced ensemble learning is to define adaptive learning error functions for the different individual neural networks in an ensemble, so that different individuals may use different forms of error function, and these error functions may change during learning. By changing each individual's error function so that learning shifts away from well-learned data points and focuses on not-yet-learned ones, the trained ensemble achieves a good balance over the whole data set.
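
To make the idea concrete, here is a minimal sketch in Python/NumPy of one way such adaptive error functions could work: after every epoch the ensemble output is checked on each data point, and every individual's squared-error function is re-weighted so that already-learned points are down-weighted and not-yet-learned points are up-weighted. This is an illustration under stated assumptions, not the paper's implementation; all function names, the weighting constants, and the toy problem are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_in, n_hidden):
    # Small one-hidden-layer MLP with tanh hidden units and a linear output.
    return {
        "W1": rng.normal(0.0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0.0, 0.5, (n_hidden, 1)),
        "b2": np.zeros(1),
    }

def forward(net, X):
    h = np.tanh(X @ net["W1"] + net["b1"])
    return h, (h @ net["W2"] + net["b2"]).ravel()

def train_step(net, X, y, sample_w, lr=0.05):
    # One gradient step on the *weighted* squared error
    #   E = (1/N) * sum_i w_i * (f(x_i) - y_i)^2,
    # so the weights w_i decide where this individual spends its effort.
    n = len(y)
    h, out = forward(net, X)
    err = 2.0 * sample_w * (out - y) / n                  # dE/d(output)
    dh = (err[:, None] * net["W2"].T) * (1.0 - h ** 2)    # back-prop through tanh
    net["W2"] -= lr * (h.T @ err[:, None])
    net["b2"] -= lr * err.sum(keepdims=True)
    net["W1"] -= lr * (X.T @ dh)
    net["b1"] -= lr * dh.sum(axis=0)

def ensemble_output(nets, X):
    # Simple averaging combination of the individuals.
    return np.mean([forward(net, X)[1] for net in nets], axis=0)

def balanced_train(nets, X, y, epochs=300):
    for _ in range(epochs):
        learned = np.sign(ensemble_output(nets, X)) == np.sign(y)
        # Adaptive error function: each epoch the per-sample weights are
        # recomputed, shifting learning away from points the ensemble
        # already gets right and toward not-yet-learned points.
        # The 0.2 / 1.8 split is an arbitrary choice for this sketch.
        w = np.where(learned, 0.2, 1.8)
        for net in nets:
            train_step(net, X, y, w)
    return nets

# Toy problem: labels in {-1, +1} given by the XOR of the coordinate signs.
X = rng.normal(0.0, 1.0, (200, 2))
y = np.sign(X[:, 0] * X[:, 1])
nets = balanced_train([init_mlp(2, 8) for _ in range(5)], X, y)
acc = np.mean(np.sign(ensemble_output(nets, X)) == y)
print(f"ensemble training accuracy: {acc:.2f}")
```

Because every individual shares the same re-weighting signal here, this sketch is simpler than the approach described in the abstract, which allows different individuals to use different error functions; giving each network its own weights, for example based on its own residuals, would be the natural extension.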

Keywords

Error Function, Hidden Node, Ensemble Learning, Individual Network, Neural Network Ensemble

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Yong Liu
  1. The University of Aizu, Aizu-Wakamatsu, Japan
