
Design of sparse Bayesian echo state network for time series prediction

  • Original Article
  • Published:
Neural Computing and Applications

Abstract

The echo state network (ESN) is a recurrent neural network with a large, randomly generated reservoir and a trainable output layer, and it has been widely applied to time series prediction. However, because the output weights are computed by simple linear regression, the training process of the ESN can suffer from an ill-posed problem. To address this issue, a sparse Bayesian ESN (SBESN) is proposed. The SBESN estimates the probability distribution of the outputs and trains the network through sparse Bayesian learning, in which an independent regularization prior is assigned to each output weight rather than a single prior shared by all weights. Simulation results show that the SBESN model is insensitive to reservoir size and outperforms the other models considered.
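The pipeline described in the abstract, a fixed random reservoir followed by an output layer trained with per-weight sparse Bayesian priors, can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact method: the reservoir size, scaling constants, the toy sine task, and the use of standard ARD (relevance vector machine style) re-estimation for the readout are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine series (an illustrative
# stand-in for the chaotic benchmarks typically used with ESNs).
T = 400
u = np.sin(0.2 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

# --- Reservoir (standard ESN construction, weights fixed after creation) ---
n_res = 100
W_in = rng.uniform(-0.1, 0.1, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo state property)

x = np.zeros(n_res)
states = []
for t in range(T):
    x = np.tanh(W_in[:, 0] * inputs[t] + W @ x)
    states.append(x.copy())
Phi = np.array(states[50:])                 # discard washout transient
y = targets[50:]

# --- Sparse Bayesian readout: one prior precision alpha_i PER weight (ARD),
# rather than a single shared ridge penalty.
alpha = np.ones(n_res)                      # per-weight prior precisions
beta = 1.0                                  # observation-noise precision
for _ in range(50):
    A = np.diag(alpha)
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + A)   # posterior covariance
    mu = beta * Sigma @ Phi.T @ y                   # posterior mean weights
    gamma = 1.0 - alpha * np.diag(Sigma)            # well-determined parameters
    alpha = gamma / (mu ** 2 + 1e-10)               # re-estimate each prior
    alpha = np.minimum(alpha, 1e6)                  # huge alpha -> weight pruned
    resid = y - Phi @ mu
    beta = (len(y) - gamma.sum()) / (resid @ resid + 1e-10)

pred = Phi @ mu
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"train RMSE: {rmse:.4f}")
```

Because each weight has its own precision, weights whose precision diverges are effectively pruned, which is what yields the sparse readout and the reported insensitivity to reservoir size.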

[Figures 1–9 appear in the full article.]



Acknowledgements

This work was supported by the National Key Research and Development Program of China under Grant 2020YFC1511702, and the National Natural Science Foundation of China under Grant 61771059.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Lei Wang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, L., Su, Z., Qiao, J. et al. Design of sparse Bayesian echo state network for time series prediction. Neural Comput & Applic 33, 7089–7102 (2021). https://doi.org/10.1007/s00521-020-05477-3

