An online learning neural network ensemble with random weights for regression of sequential data streams
Ensembles of neural networks have proven to be an effective machine learning framework. However, very few studies in the current literature have examined neural network ensembles for online regression, and the existing methods simply combine individually trained online models without considering ensemble diversity. In this paper, a novel online sequential learning algorithm for neural network ensembles for online regression is proposed. The algorithm is built upon decorrelated neural network ensembles (DNNE) and is therefore referred to as Online-DNNE: it uses single-hidden-layer feed-forward neural networks with randomly assigned hidden-node parameters as ensemble components, and it introduces negative correlation learning to train the base models simultaneously in a cooperative manner, which effectively maintains ensemble diversity. Online-DNNE learns only from newly arrived data, so its computational complexity is reduced. Experimental results on benchmark problems show the effectiveness and significant advantages of the proposed approach.
Keywords: Decorrelated neural network · Negative correlation learning · Neural network ensembles · Online sequential learning algorithm
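To make the idea concrete, the following is a minimal, hypothetical sketch of how such an algorithm might be organized. It is not the paper's exact formulation: the class name `OnlineDNNESketch`, the sigmoid activation, and all parameter names are illustrative assumptions, and the update rule is derived from one standard negative-correlation-learning objective. Each base model is a single-hidden-layer network whose hidden-node parameters are drawn at random and then frozen; the stacked output weights are the solution of NCL-penalized normal equations whose statistics are additive over data chunks, so each new chunk updates the accumulated matrices without revisiting past data.

```python
import numpy as np

class OnlineDNNESketch:
    """Hypothetical sketch of an online decorrelated ensemble of
    random-weight single-hidden-layer networks (an assumption, not
    the authors' published implementation)."""

    def __init__(self, n_inputs, n_hidden=20, n_models=5,
                 ncl_lambda=0.5, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.M, self.L, self.lam = n_models, n_hidden, ncl_lambda
        # Random, fixed hidden-layer parameters for each base model.
        self.W = [rng.uniform(-1, 1, (n_inputs, n_hidden)) for _ in range(n_models)]
        self.b = [rng.uniform(-1, 1, n_hidden) for _ in range(n_models)]
        d = n_models * n_hidden
        self.A = reg * np.eye(d)   # accumulated normal-equation matrix (ridge-initialized)
        self.rhs = np.zeros(d)     # accumulated right-hand side
        self.beta = np.zeros(d)    # stacked output weights of all base models

    def _hidden(self, X):
        # Sigmoid hidden activations of each base model.
        return [1.0 / (1.0 + np.exp(-(X @ W + b))) for W, b in zip(self.W, self.b)]

    def partial_fit(self, X, y):
        """Update with a newly arrived chunk only; past data is never revisited."""
        Hs = self._hidden(X)
        Hcat = np.hstack(Hs)  # shape (N, M*L): all hidden outputs stacked
        lam, M, L = self.lam, self.M, self.L
        # NCL-penalized normal equations (additive over chunks):
        #   A += (1 - lam) * blockdiag(H_i^T H_i) + (lam / M) * Hcat^T Hcat
        self.A += (lam / M) * (Hcat.T @ Hcat)
        for i, Hi in enumerate(Hs):
            s = slice(i * L, (i + 1) * L)
            self.A[s, s] += (1.0 - lam) * (Hi.T @ Hi)
            self.rhs[s] += Hi.T @ y
        self.beta = np.linalg.solve(self.A, self.rhs)
        return self

    def predict(self, X):
        Hs = self._hidden(X)
        # Ensemble output is the average of the base-model outputs.
        return np.mean([Hi @ self.beta[i * self.L:(i + 1) * self.L]
                        for i, Hi in enumerate(Hs)], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    model = OnlineDNNESketch(n_inputs=2)
    for _ in range(10):                       # a stream of data chunks
        X = rng.uniform(-1, 1, (50, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1]   # synthetic regression target
        model.partial_fit(X, y)
    print(model.predict(X[:5]))
```

In a practical implementation, re-solving the full linear system on every chunk would be replaced by a recursive least-squares update of the inverse matrix, in the style of OS-ELM, so that the per-chunk cost depends only on the chunk size and the number of output weights rather than on all data seen so far.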
This work was partly supported by the NSFC Project under Grants 61273031, 61525302, 61590922 and 71571156, and by the Science Foundation of Liaoning Province Project under Grant 2014020021. Junwei Wang received the Seed Funding Programme for Basic Research from the University of Hong Kong (201409159015) and was supported by an open project funded by the State Key Laboratory of Synthetical Automation for Process Industries (PAL-N201505).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
This article does not contain any studies with human participants or animals performed by any of the authors.