Soft Computing, Volume 21, Issue 20, pp 5919–5937

An online learning neural network ensemble with random weights for regression of sequential data streams

  • Jinliang Ding
  • Haitao Wang
  • Chuanbao Li
  • Tianyou Chai
  • Junwei Wang

Abstract

An ensemble of neural networks has proven to be an effective machine learning framework. However, very few studies in the current literature have examined neural network ensembles for online regression; moreover, the existing methods combine individually trained online models and do not consider ensemble diversity. In this paper, a novel online sequential learning algorithm for neural network ensembles for online regression is proposed. The algorithm is built upon the decorrelated neural network ensembles (DNNE) and is thus referred to as Online-DNNE. It uses single-hidden-layer feed-forward neural networks with randomly assigned hidden-node parameters as ensemble components, and it introduces negative correlation learning to train the base models simultaneously in a cooperative manner, which effectively maintains ensemble diversity. Online-DNNE learns only from newly arrived data, so the computational complexity is reduced. Experimental results on benchmark data sets show the effectiveness and significant advantages of the proposed approach.
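The abstract describes the scheme only at a high level. As a rough illustration of the three ingredients it names (fixed random hidden-node parameters, recursive updates on newly arrived data only, and a negative-correlation coupling between base models), below is a minimal Python sketch. It is not the authors' published Online-DNNE: all class and function names are invented for illustration, and `ncl_target` is an assumed per-member simplification of the negative correlation learning penalty, which DNNE-style ensembles actually impose jointly on all members' output weights.

```python
import numpy as np

class OnlineRandomNet:
    """One ensemble member: a single-hidden-layer net whose input weights
    stay fixed at random values; only the output weights are learned,
    here by an OS-ELM-style recursive least-squares (RLS) update."""

    def __init__(self, n_in, n_hidden, rng):
        self.W = rng.uniform(-1.0, 1.0, (n_in, n_hidden))  # random, never trained
        self.b = rng.uniform(-1.0, 1.0, n_hidden)          # random hidden biases
        self.beta = np.zeros(n_hidden)                     # output weights (learned)
        self.P = np.eye(n_hidden) * 1e3                    # RLS inverse covariance

    def hidden(self, X):
        return np.tanh(X @ self.W + self.b)                # hidden-layer activations

    def predict(self, X):
        return self.hidden(X) @ self.beta

def ncl_target(y, f_i, f_ens, lam):
    """Assumed simplification of negative correlation learning: pull member i
    toward the true target while penalizing agreement with the ensemble mean."""
    return y - lam * (f_ens - f_i)

def partial_fit(ensemble, X, y, lam=0.5):
    """Update every member on one newly arrived chunk; old data is never revisited."""
    preds = np.column_stack([m.predict(X) for m in ensemble])
    f_ens = preds.mean(axis=1)
    for i, m in enumerate(ensemble):
        t = ncl_target(y, preds[:, i], f_ens, lam)  # decorrelated pseudo-target
        H = m.hidden(X)
        # standard chunk-wise RLS update of the output weights (as in OS-ELM)
        K = m.P @ H.T @ np.linalg.inv(np.eye(len(y)) + H @ m.P @ H.T)
        m.beta = m.beta + K @ (t - H @ m.beta)
        m.P = m.P - K @ H @ m.P

# demo on a synthetic stream of data chunks
rng = np.random.default_rng(0)
ensemble = [OnlineRandomNet(n_in=4, n_hidden=20, rng=rng) for _ in range(5)]
for _ in range(100):
    X = rng.uniform(-1.0, 1.0, (32, 4))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(32)
    partial_fit(ensemble, X, y)
    y_hat = np.mean([m.predict(X) for m in ensemble], axis=0)  # ensemble output
```

Because each member revisits only the current chunk, the per-chunk cost is fixed by the chunk size and the number of hidden nodes rather than by the length of the stream, which is the source of the reduced computational complexity the abstract claims.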

Keywords

Decorrelated neural network · Negative correlation learning · Neural network ensembles · Online sequential learning algorithm


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  • Jinliang Ding (1)
  • Haitao Wang (1)
  • Chuanbao Li (1)
  • Tianyou Chai (1)
  • Junwei Wang (2)

  1. State Key Laboratory of Synthetical Automation for Process Industries, Northeastern University, Shenyang, People’s Republic of China
  2. Department of Industrial and Manufacturing Systems Engineering, The University of Hong Kong, Hong Kong, People’s Republic of China
