Aggregating Regressive Estimators: Gradient-Based Neural Network Ensemble

  • Jiang Meng
  • Kun An
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4293)


A gradient-based algorithm for modifying ensemble weights is presented and applied to regression tasks. Simulation results show that this method produces an estimator ensemble with better generalization than bagging or a single neural network. The method not only selects a subset of subnets from all trained networks, as GASEN does, but also outperforms GASEN, bagging, and the best individual regressive estimator.
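The page does not show the paper's exact update rule, but the idea of gradient-based ensemble weight modification can be sketched as follows: given predictions from pre-trained networks, descend on the squared error of their weighted combination, keeping the weights non-negative and normalized. The function name, learning rate, and projection step below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def learn_ensemble_weights(P, y, lr=0.5, steps=5000):
    """Hypothetical sketch: gradient descent on ensemble weights.

    P : (n_samples, n_estimators) predictions of pre-trained networks
    y : (n_samples,) regression targets
    Returns weights that are non-negative and sum to one.
    """
    n_est = P.shape[1]
    w = np.full(n_est, 1.0 / n_est)      # start from uniform weights
    for _ in range(steps):
        residual = P @ w - y             # ensemble error per sample
        grad = P.T @ residual / len(y)   # gradient of mean squared error
        w -= lr * grad
        w = np.clip(w, 0.0, None)        # project back: non-negative,
        w /= w.sum()                     # summing to one
    return w

# Toy check: three noisy "estimators" of y = x; the third is much
# noisier, so gradient descent should drive its weight down.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
P = np.column_stack([x + 0.05 * rng.standard_normal(200),
                     x + 0.05 * rng.standard_normal(200),
                     x + 0.50 * rng.standard_normal(200)])
w = learn_ensemble_weights(P, x)
```

Under these assumptions the learned weights down-weight the high-variance estimator, which mirrors the paper's claim that weight modification can effectively deselect poor subnets.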


Keywords: Regression Problem · Good Generalization · Multivariate Adaptive Regression Spline · Regression Task · Neural Network Ensemble




References

  1. Opitz, D., Shavlik, J.: Actively Searching for an Effective Neural Network Ensemble. Connection Science 8(3–4), 337–353 (1996)
  2. Hansen, L.K., Salamon, P.: Neural Network Ensembles. IEEE Trans. on Pattern Analysis and Machine Intelligence 12(10), 993–1001 (1990)
  3. Breiman, L.: Bagging Predictors. Machine Learning 24, 123–140 (1996)
  4. Schapire, R.E.: The Strength of Weak Learnability. Machine Learning 5, 197–227 (1990)
  5. Freund, Y.: Boosting a Weak Learning Algorithm by Majority. Information and Computation 121, 256–285 (1995)
  6. Freund, Y., Schapire, R.E.: A Decision-theoretic Generalization of On-line Learning and an Application to Boosting. Journal of Computer and System Sciences 55, 119–139 (1997)
  7. Drucker, H.: Improving Regressors Using Boosting Techniques. In: Proc. of the 14th International Conf. on Machine Learning, pp. 107–115. Morgan Kaufmann, Burlington, MA (1997)
  8. Schapire, R.E., Singer, Y.: Improved Boosting Algorithms Using Confidence-rated Predictions. Machine Learning 37(3), 297–336 (1999)
  9. Avnimelech, R., Intrator, N.: Boosted Mixture of Experts: An Ensemble Learning Scheme. Neural Computation 11, 483–497 (1999)
  10. Solomatine, D.P., Shrestha, D.L.: AdaBoost.RT: A Boosting Algorithm for Regression Problems. In: Proc. of the 2004 IEEE International Joint Conf. on Neural Networks, vol. 2, pp. 1163–1168 (2004)
  11. Islam, M., Yao, X., Murase, K.: A Constructive Algorithm for Training Cooperative Neural Network Ensembles. IEEE Trans. on Neural Networks 14(4), 820–834 (2003)
  12. Liu, Y., Yao, X.: Simultaneous Training of Negatively Correlated Neural Networks in an Ensemble. IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics 29(6), 716–725 (1999)
  13. Jang, M., Cho, S.: Observational Learning Algorithm for an Ensemble of Neural Networks. Pattern Analysis & Applications 5, 154–167 (2002)
  14. Perrone, M.P., Cooper, L.N.: When Networks Disagree: Ensemble Methods for Neural Networks. In: Artificial Neural Networks for Speech and Vision, pp. 126–142. Chapman and Hall, New York (1993)
  15. Zhou, Z.H., Wu, J.X., Tang, W.: Ensembling Neural Networks: Many Could Be Better Than All. Artificial Intelligence 137, 239–263 (2002)
  16. Zhou, Z.H., Wu, J.X., Jiang, Y., et al.: Genetic Algorithm Based Selective Neural Network Ensemble. In: Proc. of the 17th International Joint Conf. on Artificial Intelligence, Seattle, WA, vol. 2, pp. 797–802 (2001)
  17. Geman, S., Bienenstock, E., Doursat, R.: Neural Networks and the Bias/Variance Dilemma. Neural Computation 4(1), 1–58 (1992)
  18. Krogh, A., Vedelsby, J.: Neural Network Ensembles, Cross Validation, and Active Learning. In: Advances in Neural Information Processing Systems, vol. 7, pp. 231–238. MIT Press, Cambridge (1995)
  19. Friedman, J.: Multivariate Adaptive Regression Splines. Annals of Statistics 19, 1–141 (1991)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jiang Meng (1)
  • Kun An (2)
  1. School of Mechanical Engineering and Automatization, North University of China, Taiyuan, Shanxi, China
  2. School of Information and Communication Engineering, North University of China, Taiyuan, Shanxi, China
