
BRAIN2DEPTH: Lightweight CNN Model for Classification of Cognitive States from EEG Recordings

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12722)

Abstract

Several convolutional deep learning models have been proposed to classify cognitive states across a range of neuroimaging modalities. These models achieve strong results, but they typically contain millions of parameters, which increases training and inference time and makes them less suitable for real-time analysis. This paper proposes a simple, lightweight CNN to classify cognitive states from electroencephalography (EEG) recordings. We develop a novel two-stage pipeline to learn distinct cognitive representations. The first stage generates 2D spectral images from the neural time-series signals in a particular frequency band; the images preserve both the spatial relationship between neighboring electrodes and the spectral properties of the cognitive events. The second stage is a time-efficient, computationally light, yet high-performing model: a network of four blocks whose main components are standard and depth-wise convolutions to increase performance, followed by separable convolutions to reduce the number of parameters, maintaining the trade-off between time and performance. We experiment on an open-access EEG meditation dataset comprising expert meditative, non-expert meditative, and control states, and compare against six commonly used machine learning classifiers and four state-of-the-art deep learning models. We attain comparable performance while using less than 4% of the parameters of the other models, making the approach suitable for real-time settings such as neurofeedback.
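To make the second stage concrete, the sketch below shows how a lightweight four-block CNN mixing standard, depth-wise, and separable convolutions can be assembled in Keras. This is not the authors' exact architecture: the input image size, filter counts, kernel sizes, and pooling placement are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal sketch, assuming an LNCS-style lightweight CNN over 2D spectral EEG
# images. Hyperparameters below are placeholders, not the published values.
import tensorflow as tf
from tensorflow.keras import layers, models


def build_lightweight_eeg_cnn(input_shape=(32, 32, 3), n_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Block 1: standard convolution for low-level spectral features
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = layers.BatchNormalization()(x)

    # Block 2: depth-wise convolution (per-channel filtering, few parameters)
    x = layers.DepthwiseConv2D(3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.MaxPooling2D()(x)

    # Blocks 3-4: separable convolutions keep the parameter count low
    for filters in (32, 64):
        x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D()(x)

    # Classification head: expert / non-expert meditative / control
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)


model = build_lightweight_eeg_cnn()
model.summary()  # on the order of tens of thousands of parameters
```

The design choice mirrors the abstract: depth-wise and separable convolutions factor a standard convolution into cheaper per-channel and point-wise steps, which is what keeps the parameter count a small fraction of conventional deep CNNs.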

Keywords

EEG · CNN · Deep Learning · Meditation · Neurofeedback · Neural signals

Notes

Acknowledgement

We thank SERB and PlayPower Labs for supporting the PMRF Fellowship, and FICCI for facilitating it. We thank Pragati Gupta and Nashra Ahmad for their valuable feedback.


Copyright information

© Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Computer Science and Engineering, IIT Gandhinagar, Ahmedabad, India
  2. Centre for Cognitive and Brain Sciences, IIT Gandhinagar, Ahmedabad, India
