Abstract
The performance of deep neural predictors depends on a multitude of factors that make up the experimental setting. We report all the specific information needed to ensure the reproducibility of a wide range of numerical experiments. A sensitivity analysis of some critical aspects is provided to demonstrate the robustness of our setting. Regarding long-term behavior, the predictors trained for one-step forecasting are able to reproduce the statistical properties of the attractor, i.e., the so-called attractor's climate, whereas the multi-step ones are unsuitable for replicating these statistical properties but provide accurate forecasts up to several Lyapunov times. Lastly, we offer some remarks on the training procedure of the different predictors and introduce some advanced neural architectures to give an overview of their possible advantages and disadvantages with respect to those implemented in this study.
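The notion of a "Lyapunov time" used above (the timescale over which nearby trajectories separate, and hence the natural unit for a forecasting horizon) can be made concrete with a minimal, self-contained sketch. The example below is illustrative only and is not part of the chapter's experimental setup: it estimates the largest Lyapunov exponent of the logistic map at r = 4 (exact value ln 2) by averaging the log of the local stretching factor |f'(x)| along a long trajectory, after discarding a transient.

```python
import numpy as np

def logistic(x, r=4.0):
    """One step of the logistic map x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def lyapunov(r=4.0, x0=0.2, n_transient=1000, n_steps=100_000):
    """Estimate the largest Lyapunov exponent as the trajectory average
    of log|f'(x)|, where f'(x) = r * (1 - 2x) for the logistic map."""
    x = x0
    for _ in range(n_transient):      # let the orbit settle onto the attractor
        x = logistic(x, r)
    acc = 0.0
    for _ in range(n_steps):
        acc += np.log(abs(r * (1.0 - 2.0 * x)))  # log of local stretching
        x = logistic(x, r)
    return acc / n_steps

lam = lyapunov()
print(f"lambda ~= {lam:.3f} (exact: ln 2 = {np.log(2):.3f})")
```

One Lyapunov time is roughly 1/lambda map iterations, so a statement like "accurate forecasting up to several Lyapunov times" translates, for this map, into a horizon of a few times 1/ln 2 ≈ 1.44 steps per Lyapunov time.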
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Sangiorgio, M., Dercole, F., & Guariso, G. (2021). Neural Predictors' Sensitivity and Robustness. In: Deep Learning in Multi-step Prediction of Chaotic Dynamics. SpringerBriefs in Applied Sciences and Technology. Springer, Cham. https://doi.org/10.1007/978-3-030-94482-7_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-94481-0
Online ISBN: 978-3-030-94482-7
eBook Packages: Mathematics and Statistics (R0)