
A Multivariate Time Series Classification Method Based on Self-attention

  • Huiwei Lin
  • Yunming Ye (email author)
  • Ka-Cheong Leung
  • Bowen Zhang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1107)

Abstract

Multivariate Time Series Classification (MTSC) is believed to be a crucial task for dynamic process recognition and has been widely studied. In recent years, end-to-end MTSC with Convolutional Neural Networks (CNNs) has gained increasing attention thanks to their ability to integrate local features. However, it remains a significant challenge for CNNs to handle global information and long-range dependencies in time series. In this paper, we present a simple and feasible architecture for MTSC that addresses these problems. Our model benefits from self-attention, which helps the CNN directly capture the relationships of a time series between any two time steps or variables. Experimental results on thirty-five complex MTSC tasks show the effectiveness and universality of the proposed model, which outperforms the existing state-of-the-art (SOTA) model overall. Besides, our model is computationally efficient, running about six hours faster than the current model.
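The abstract describes pairing temporal convolutions with self-attention so the network can relate any two time steps of a multivariate series. The paper's actual architecture is not reproduced here; the following is a minimal PyTorch sketch of that general idea, in which all layer names, sizes, and the pooling choice are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch: a 1-D convolution extracts local features from a multivariate
# time series, and a self-attention layer lets every time step attend to every
# other time step before classification. Hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
    """Scaled dot-product self-attention over the time dimension."""

    def __init__(self, dim: int):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                        # x: (batch, time, dim)
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        return attn @ v                          # (batch, time, dim)


class ConvSelfAttentionClassifier(nn.Module):
    """1-D convolution for local features + self-attention for global context."""

    def __init__(self, n_variables: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_variables, hidden, kernel_size=3, padding=1),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
        )
        self.attention = SelfAttention(hidden)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, n_variables, time)
        h = self.conv(x)                         # (batch, hidden, time)
        h = self.attention(h.transpose(1, 2))    # (batch, time, hidden)
        return self.classifier(h.mean(dim=1))    # average-pool over time, then classify


# Example: a batch of 8 series with 6 variables and 128 time steps, 4 classes.
logits = ConvSelfAttentionClassifier(n_variables=6, n_classes=4)(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 4])
```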

Keywords

Multivariate time series classification · Temporal Convolutional Network · Self-attention

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Huiwei Lin (1)
  • Yunming Ye (1) (email author)
  • Ka-Cheong Leung (1)
  • Bowen Zhang (2)
  1. School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Shenzhen, China
  2. School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
