
Boosted Embeddings for Time-Series Forecasting

  • Conference paper
  • In: Machine Learning, Optimization, and Data Science (LOD 2021)

Abstract

Time-series forecasting is a fundamental task arising in diverse data-driven applications. Many advanced autoregressive methods, such as ARIMA, have been used to build forecasting models. More recently, deep learning-based methods such as DeepAR, NeuralProphet, and Seq2Seq have been explored for time-series forecasting. In this paper, we propose a novel time-series forecasting model, DeepGB. We formulate and implement a variant of gradient boosting in which the weak learners are deep neural networks whose weights are found incrementally, in a greedy manner, over iterations. In particular, we develop a new embedding architecture that improves the performance of many deep learning models on time-series data using this gradient boosting variant. We demonstrate that our model outperforms comparable existing state-of-the-art methods on real-world sensor data and public data sets.
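The two ideas the abstract combines, stage-wise boosting with neural-network weak learners and a learned embedding for a categorical time feature, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' DeepGB implementation: the day-of-week feature, the layer sizes, the embedding dimension, and the stage count are all hypothetical choices for exposition.

```python
# Hedged sketch: gradient boosting where each weak learner is a small
# neural network with an embedding for a categorical time feature.
# Architecture and hyperparameters are illustrative guesses, not the
# paper's actual DeepGB configuration.
import numpy as np
import tensorflow as tf

def build_weak_learner(n_lags, n_categories=7, emb_dim=3):
    # Numeric lag features plus an embedded day-of-week index.
    lags = tf.keras.Input(shape=(n_lags,), name="lags")
    dow = tf.keras.Input(shape=(1,), dtype="int32", name="day_of_week")
    emb = tf.keras.layers.Embedding(n_categories, emb_dim)(dow)
    emb = tf.keras.layers.Flatten()(emb)
    x = tf.keras.layers.Concatenate()([lags, emb])
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    out = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model([lags, dow], out)
    model.compile(optimizer="adam", loss="mse")
    return model

def fit_boosted(X_lags, X_dow, y, n_stages=3, epochs=50):
    learners, residual = [], y.astype("float32")
    for _ in range(n_stages):
        learner = build_weak_learner(X_lags.shape[1])
        learner.fit([X_lags, X_dow], residual, epochs=epochs, verbose=0)
        # Greedy stage-wise fit: each learner models the residual left
        # by the ensemble built so far.
        residual = residual - learner.predict([X_lags, X_dow], verbose=0).ravel()
        learners.append(learner)
    return learners

def predict_boosted(learners, X_lags, X_dow):
    # The boosted forecast is the sum of the stage outputs.
    return sum(l.predict([X_lags, X_dow], verbose=0).ravel() for l in learners)
```

Each stage fits the residual left by the ensemble so far, so the final forecast is the sum of stage outputs; the paper's actual weak learners, feature set, and stopping criterion may differ from this sketch.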




Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Karingula, S.R. et al. (2022). Boosted Embeddings for Time-Series Forecasting. In: Nicosia, G., et al. Machine Learning, Optimization, and Data Science. LOD 2021. Lecture Notes in Computer Science, vol 13164. Springer, Cham. https://doi.org/10.1007/978-3-030-95470-3_1


  • DOI: https://doi.org/10.1007/978-3-030-95470-3_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-95469-7

  • Online ISBN: 978-3-030-95470-3

  • eBook Packages: Computer Science, Computer Science (R0)
