
Neural Predictors’ Sensitivity and Robustness

A chapter of the book Deep Learning in Multi-step Prediction of Chaotic Dynamics.

Abstract

The performance of deep neural predictors depends on the many factors that make up the experimental setting. We report all the information needed to reproduce a wide range of numerical experiments, and we provide a sensitivity analysis on the most critical aspects to demonstrate the robustness of our setting. Regarding the predictors’ long-term behavior, those trained for one-step forecasting reproduce the statistical properties of the attractor, i.e., the so-called attractor’s climate, whereas the multi-step predictors fail to replicate these statistics but deliver accurate forecasts up to several Lyapunov times. Lastly, we offer some remarks on the training procedures of the different predictors and introduce some advanced neural architectures, outlining their possible advantages and disadvantages with respect to those implemented in this study.
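To make the one-step vs. multi-step distinction concrete, the following is a minimal, self-contained sketch, not the authors’ actual pipeline: the logistic-map data, the network size, and the training loop are illustrative assumptions. A one-step LSTM predictor is trained in open loop (the true history is always the input) and is then iterated in closed loop, feeding its own outputs back as inputs; comparing the statistics of a long free run with those of the data checks whether the attractor’s climate is reproduced.

```python
# Minimal sketch (assumed setup, not the chapter's exact experiments):
# a one-step LSTM predictor on the logistic map, then a closed-loop free run.
import numpy as np
import torch
import torch.nn as nn

# --- generate a chaotic series (logistic map, r = 4) ---
def logistic_series(n, x0=0.2):
    x = np.empty(n)
    x[0] = x0
    for t in range(n - 1):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x

series = logistic_series(5000)
window = 10  # length of the input history fed to the network

# --- build (history, next value) training pairs ---
X = np.stack([series[t:t + window] for t in range(len(series) - window)])
y = series[window:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)  # (N, window, 1)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)  # (N, 1)

class OneStepLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next sample only

model = OneStepLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):  # one-step (teacher-forced) training
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

# --- closed-loop free run: feed predictions back as inputs ---
with torch.no_grad():
    hist = list(series[:window])
    free_run = []
    for _ in range(2000):
        inp = torch.tensor(hist[-window:], dtype=torch.float32).view(1, window, 1)
        nxt = model(inp).item()
        free_run.append(nxt)
        hist.append(nxt)

# Compare long-term statistics (the "climate"), not pointwise accuracy:
print("data mean/std:    ", series.mean(), series.std())
print("free-run mean/std:", np.mean(free_run), np.std(free_run))
```

A multi-step predictor would instead be optimized directly on its closed-loop (or multi-output) forecasts over a fixed horizon. Such a model typically tracks the true trajectory for longer (for the logistic map at r = 4, one Lyapunov time corresponds to 1/ln 2 ≈ 1.44 iterations, since the Lyapunov exponent is ln 2), but, as stated in the abstract, its free run need not settle on the correct long-term statistics.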


Author information

Corresponding author: Matteo Sangiorgio.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Sangiorgio, M., Dercole, F., & Guariso, G. (2021). Neural Predictors’ Sensitivity and Robustness. In: Deep Learning in Multi-step Prediction of Chaotic Dynamics. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-94482-7_6
