Abstract
When using LSTM networks to model time-series data, the standard approach is to segment the continuous data stream into fixed-size sequences and feed each sequence independently to the LSTM network for training in a stateless fashion, i.e., with the LSTM cell state reset after every fixed-size sequence. As a result, long-term dependencies between patterns appearing in the data stream may be lost. In this work, we introduce a hybrid deep learning architecture that enables long-term inter-sequence modeling while maintaining focus on each sequence's local characteristics. We use stateful LSTM training to model long-term dependencies that span the fixed-size sequences. We also utilize the attention mechanism to learn each training sequence more effectively by focusing on the parts of each sequence that affect the classification outcome the most. Our experimental results show the advantages of each of these two mechanisms independently and in conjunction, compared to the standard stateless LSTM training approach.
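To make the two ideas concrete, the sketch below (not the authors' implementation) shows one way to combine stateful training and self-attention in TensorFlow/Keras: an LSTM layer with stateful=True carries its cell state across consecutive fixed-size sequences, a dot-product self-attention layer re-weights the per-timestep outputs, and the states are reset only at stream boundaries rather than after every sequence. All shapes, hyperparameters, and the toy data are illustrative assumptions.

```python
# A minimal sketch (TensorFlow 2.x / Keras assumed), not the authors' code:
# a stateful LSTM whose cell state is carried across consecutive fixed-size
# sequences, followed by dot-product self-attention over the per-timestep
# outputs before classification. Shapes, units, and the toy data are placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

BATCH = 32        # stateful training requires a fixed batch size
TIMESTEPS = 50    # length of each fixed-size sequence segment
FEATURES = 8      # sensor channels per timestep (illustrative)
UNITS = 64

inputs = layers.Input(shape=(TIMESTEPS, FEATURES), batch_size=BATCH)

# stateful=True: the final hidden/cell state of each batch slot is reused as
# the initial state for the next batch, so dependencies can span consecutive
# fixed-size sequences instead of being reset for every sequence.
lstm_out = layers.LSTM(UNITS, return_sequences=True, stateful=True)(inputs)

# Self-attention over the LSTM outputs: every timestep attends to every other
# timestep, letting the model weight the parts of the sequence that matter most.
attended = layers.Attention()([lstm_out, lstm_out])
context = layers.GlobalAveragePooling1D()(attended)

outputs = layers.Dense(1, activation="sigmoid")(context)
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in data: 10 consecutive batches of segments from the same stream.
x = np.random.rand(BATCH * 10, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(BATCH * 10, 1)).astype("float32")

for epoch in range(3):
    # shuffle=False keeps segments in temporal order so the carried state is meaningful.
    model.fit(x, y, batch_size=BATCH, epochs=1, shuffle=False, verbose=0)
    # Reset the carried-over states only at stream/epoch boundaries,
    # not after every fixed-size sequence as in stateless training.
    for layer in model.layers:
        if hasattr(layer, "reset_states"):
            layer.reset_states()
```

With real sensor streams, consecutive segments of the same recording must occupy the same batch slot in consecutive batches; otherwise the carried-over state mixes unrelated streams and statefulness loses its benefit.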
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Katrompas, A., Metsis, V. (2022). Enhancing LSTM Models with Self-attention and Stateful Training. In: Arai, K. (eds) Intelligent Systems and Applications. IntelliSys 2021. Lecture Notes in Networks and Systems, vol 294. Springer, Cham. https://doi.org/10.1007/978-3-030-82193-7_14
DOI: https://doi.org/10.1007/978-3-030-82193-7_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-82192-0
Online ISBN: 978-3-030-82193-7
eBook Packages: Intelligent Technologies and Robotics; Intelligent Technologies and Robotics (R0)