
Neural Approaches for Time Series Forecasting

A chapter in the book Deep Learning in Multi-step Prediction of Chaotic Dynamics.

Abstract

The problem of forecasting a time series with a neural network is well defined when a single step-ahead prediction is considered. The situation becomes more tangled when predicting over a multiple-step horizon, and the task can consequently be framed in different ways. For example, one can develop a single-step predictor and apply it recursively along the forecasting horizon (recursive approach), or develop a multi-output model that directly forecasts the entire sequence of output values (multi-output approach). Additionally, each predictor may internally be a classical feed-forward (FF) network or a recurrent architecture, such as a long short-term memory (LSTM) net. The latter are traditionally trained with the teacher forcing algorithm (LSTM-TF) to speed up the convergence of the optimization, or without it (LSTM-no-TF) to avoid the issue of exposure bias. Time series forecasting also requires organizing the available data into input-output sequences for parameter training, hyperparameter tuning, and performance testing. A further design choice explored in the chapter is the definition of the similarity index (error metric) that the training procedure must optimize, and of the other performance indicators that may be used to assess how well the prediction replicates the test data.
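The windowing of a series into input-output pairs and the contrast between the recursive and multi-output framings can be sketched with a deliberately simple linear least-squares predictor standing in for the neural networks discussed in the chapter (the toy series, window sizes, and all names below are illustrative, not taken from the book):

```python
import numpy as np

# Toy series: a noisy sine wave standing in for a chaotic signal.
rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(0.1 * t) + 0.05 * rng.standard_normal(t.size)

def make_windows(y, n_in, n_out):
    """Slice a series into (input, output) pairs for supervised training."""
    X, Y = [], []
    for i in range(len(y) - n_in - n_out + 1):
        X.append(y[i:i + n_in])
        Y.append(y[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(Y)

n_in, horizon = 10, 5

# Recursive approach: fit a one-step predictor, apply it h times,
# feeding each prediction back in as if it were an observation.
X1, Y1 = make_windows(series, n_in, 1)
w1, *_ = np.linalg.lstsq(X1, Y1, rcond=None)

def recursive_forecast(window, w, h):
    window = list(window)
    preds = []
    for _ in range(h):
        y_hat = float(np.dot(window[-n_in:], w))
        preds.append(y_hat)
        window.append(y_hat)  # prediction becomes the next input
    return np.array(preds)

# Multi-output approach: one model emits the whole horizon at once.
Xh, Yh = make_windows(series, n_in, horizon)
wh, *_ = np.linalg.lstsq(Xh, Yh, rcond=None)

last_window = series[-n_in:]
rec = recursive_forecast(last_window, w1[:, 0], horizon)
mo = last_window @ wh  # shape (horizon,)
```

The recursive predictor reuses one small model but compounds its own errors over the horizon; the multi-output model avoids that feedback at the cost of a harder learning problem, which is exactly the trade-off the chapter examines for neural predictors.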

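The difference between teacher-forced (LSTM-TF) and free-running (LSTM-no-TF) unrolling can be sketched with a tiny untrained recurrent cell in NumPy; the weights, sizes, and target series here are illustrative placeholders, and a real setup would of course train through these unrollings rather than just run them forward:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny recurrent cell with random (untrained) weights; the point is the
# unrolling scheme, not the quality of the fit.
n_hidden = 8
W_in = 0.1 * rng.standard_normal((1, n_hidden))
W_h = 0.1 * rng.standard_normal((n_hidden, n_hidden))
W_out = 0.1 * rng.standard_normal((n_hidden, 1))

def step(x, h):
    """One recurrent update followed by a linear readout."""
    h = np.tanh(x @ W_in + h @ W_h)
    return h, h @ W_out

def unroll(x0, targets, teacher_forcing):
    """Run the cell over the horizon.

    teacher_forcing=True  -> feed the ground-truth target at each step,
                             as in training with teacher forcing.
    teacher_forcing=False -> feed the cell's own previous output,
                             matching how the net is used at inference
                             time (and exposing it to its own errors).
    """
    h = np.zeros((1, n_hidden))
    x = np.array([[x0]])
    outputs = []
    for k in range(len(targets)):
        h, y = step(x, h)
        outputs.append(float(y))
        next_in = targets[k] if teacher_forcing else float(y)
        x = np.array([[next_in]])
    return np.array(outputs)

targets = np.sin(0.1 * np.arange(1, 6))
tf_out = unroll(0.5, targets, teacher_forcing=True)
fr_out = unroll(0.5, targets, teacher_forcing=False)
```

The first step is identical in both modes; from the second step on the two trajectories diverge, because the free-running cell consumes its own output while the teacher-forced cell consumes the ground truth. That divergence is precisely the train/test mismatch ("exposure bias") that motivates the LSTM-no-TF training discussed in the abstract.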



Author information

Correspondence to Matteo Sangiorgio.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Sangiorgio, M., Dercole, F., & Guariso, G. (2021). Neural Approaches for Time Series Forecasting. In: Deep Learning in Multi-step Prediction of Chaotic Dynamics. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-94482-7_4

