Stacked Denoising Auto-Encoders for Short-Term Time Series Forecasting

  • Conference paper
  • In: Artificial Neural Networks

Part of the book series: Springer Series in Bio-/Neuroinformatics (SSBN, volume 4)

Abstract

This chapter presents a study of deep learning techniques for short-term time-series forecasting. Stacked Denoising Auto-Encoders (SDAEs) make it possible to disentangle complex characteristics in time-series data. The effects of complete and partial fine-tuning are examined. SDAEs prove able to train deeper models and consequently to learn more complex characteristics in the data; as a result, these models generalize better. Pre-trained models also generalize better when used without covariates. The learned weights turn out to be sparse, suggesting lines for future exploration and research.
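
To make the pipeline concrete, below is a minimal PyTorch sketch of the approach the abstract describes: greedy layer-wise denoising pre-training, stacking the pre-trained encoders under a forecasting output, and switching between complete and partial fine-tuning. The window size, layer widths, Gaussian corruption, and all hyper-parameters are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

def pretrain_layer(encoder, decoder, data, noise_std=0.1, epochs=50, lr=1e-3):
    """Denoising pre-training of one layer: reconstruct the clean input
    from a corrupted copy (the denoising criterion of Vincent et al.)."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        corrupted = data + noise_std * torch.randn_like(data)  # assumed Gaussian corruption
        loss = loss_fn(decoder(encoder(corrupted)), data)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Illustrative data: 1000 windows of the past 24 samples of a series.
window, widths = 24, [64, 32]
x = torch.randn(1000, window)            # stand-in for real windowed series data
y = torch.randn(1000, 1)                 # stand-in one-step-ahead targets

# Greedy layer-wise pre-training: each new layer is trained on the
# codes produced by the layers below it.
encoders, feats = [], x
for h in widths:
    enc = nn.Sequential(nn.Linear(feats.shape[1], h), nn.Tanh())
    dec = nn.Linear(h, feats.shape[1])
    pretrain_layer(enc, dec, feats)
    with torch.no_grad():
        feats = enc(feats)
    encoders.append(enc)

# Stack the encoders and add a linear forecasting output on top.
model = nn.Sequential(*encoders, nn.Linear(widths[-1], 1))

# Partial fine-tuning freezes the pre-trained encoders and trains only the
# new output layer; complete fine-tuning updates every parameter.
PARTIAL = True
if PARTIAL:
    for enc in encoders:
        for p in enc.parameters():
            p.requires_grad = False
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
for _ in range(100):                     # supervised fine-tuning loop
    loss = nn.MSELoss()(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

For multi-step forecasting one would widen the output layer or iterate one-step predictions; the sketch keeps a single-step output for brevity.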

Author information

Correspondence to Pablo Romeu.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Romeu, P., Zamora-Martínez, F., Botella-Rocamora, P., Pardo, J. (2015). Stacked Denoising Auto-Encoders for Short-Term Time Series Forecasting. In: Koprinkova-Hristova, P., Mladenov, V., Kasabov, N. (eds) Artificial Neural Networks. Springer Series in Bio-/Neuroinformatics, vol 4. Springer, Cham. https://doi.org/10.1007/978-3-319-09903-3_23

  • DOI: https://doi.org/10.1007/978-3-319-09903-3_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-09902-6

  • Online ISBN: 978-3-319-09903-3

  • eBook Packages: Engineering; Engineering (R0)
