Fourier-Based Parametrization of Convolutional Neural Networks for Robust Time Series Forecasting

  • Sascha Krstanovic
  • Heiko Paulheim
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11828)

Abstract

Classical statistical models for time series forecasting typically make a number of assumptions about the data at hand, thereby requiring intensive manual preprocessing prior to modeling. As a consequence, it is very challenging to come up with a more generic forecasting framework. Extensive hyperparameter optimization and ensemble architectures are common strategies to tackle this problem; however, they come at the cost of high computational complexity. Instead of optimizing hyperparameters by training multiple models, we propose a method to estimate optimal hyperparameters directly from the characteristics of the time series at hand. To that end, we use Convolutional Neural Networks (CNNs) for time series forecasting and determine part of the network layout based on the time series' Fourier coefficients. Our approach significantly reduces the required model configuration time and shows competitive performance on time series data across various domains. A comparison to popular, state-of-the-art forecasting algorithms reveals further improvements in runtime and practicability.
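The core idea of deriving CNN hyperparameters from a series' Fourier coefficients can be illustrated with a minimal sketch. The helper below (a hypothetical illustration, not the authors' exact procedure; the function name, the peak-picking heuristic, and the mapping to kernel sizes are all assumptions) extracts the dominant seasonal periods from the magnitude spectrum, which could then inform, e.g., convolutional kernel widths:

```python
import numpy as np

def dominant_periods(series, top_k=3):
    """Estimate dominant seasonal periods from the Fourier spectrum.

    Illustrative sketch: pick the top_k strongest non-zero frequency
    bins and convert them to periods (in samples).
    """
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))      # magnitudes of Fourier coefficients
    freqs = np.fft.rfftfreq(len(x))        # frequencies in cycles per sample
    # Skip the zero-frequency bin, keep the strongest remaining peaks
    order = np.argsort(spectrum[1:])[::-1][:top_k] + 1
    return np.round(1.0 / freqs[order]).astype(int)

# Example: a series with a 12-sample seasonality
t = np.arange(240)
series = np.sin(2 * np.pi * t / 12)
print(dominant_periods(series, top_k=1))  # strongest period: 12
```

In a setup like the one the paper describes, such recovered periods could seed the receptive-field sizes of the CNN instead of searching over them, which is where the runtime savings over exhaustive hyperparameter optimization would come from.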

Keywords

Time series forecasting · Neural networks · Fourier analysis

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Data and Web Science Group, University of Mannheim, Mannheim, Germany