An Attention-Based CNN for ECG Classification

  • Alexander Kuvaev
  • Roman Khudorozhkov
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 943)

Abstract

The paper addresses the problem of improving the interpretability of a convolutional neural network, using ECG classification as an example. This is achieved with an architecture built from attention modules. Each module generates a mask that selects only those features required to make the final prediction. By visualizing these masks, the areas of the signal that are important for decision-making can be identified. The model was trained both on raw signals and on their logarithmic spectrograms. For raw signals, the generated masks did not perform any meaningful filtering of the feature maps; for spectrograms, however, interpretable masks responsible for noise reduction and the detection of arrhythmic segments were obtained.
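The gating mechanism described above can be illustrated with a minimal NumPy sketch. This is not the authors' architecture: the single linear projection, the sigmoid gate, and all names (`attention_mask`, `apply_attention`, `w`, `b`) are hypothetical simplifications of the general idea that a module scores each position of a feature map and suppresses positions whose mask value is near zero.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_mask(features, w, b):
    """Score each time step with a 1x1 projection; squash to a soft mask in (0, 1)."""
    scores = features @ w + b          # (time, 1) attention scores
    return sigmoid(scores)             # gate values in (0, 1)

def apply_attention(features, w, b):
    """Element-wise gating: time steps with mask near 0 are filtered out."""
    mask = attention_mask(features, w, b)
    return features * mask, mask

# toy example: a feature map with 8 time steps and 4 channels
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
w = rng.normal(size=(4, 1))            # illustrative projection weights
b = np.zeros(1)
gated, mask = apply_attention(feats, w, b)
```

Visualizing `mask` against the input signal is what, in the paper's setting, reveals which regions of the ECG (or its spectrogram) the network relies on.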

Keywords

Convolutional neural networks · Attention mechanism · ECG classification

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Gazprom Neft, Saint Petersburg, Russia