
Neural Decoding of Attentional Selection in Multi-speaker Environments Without Access to Clean Sources

Chapter in Brain–Computer Interface Research

Abstract

People with hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Modern hearing aids can suppress background noise, but they can do little to help a user attend to a single conversation without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. A number of challenges remain, including the lack of access to the clean sound sources in the environment against which to compare the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD. We present an end-to-end system that (1) receives a single audio channel containing a mixture of speakers heard by a listener, along with the listener's neural signals, (2) automatically separates the individual speakers in the mixture, (3) determines the attended speaker, and (4) amplifies the attended speaker's voice to assist the listener. Using invasive electrophysiology recordings, our system can decode the attention of a subject and detect switches in attention using only the mixed audio. We also identified the regions of the auditory cortex that contribute to AAD. Our quality assessment of the modified audio demonstrates a significant improvement in both subjective and objective speech quality measures. Our novel framework for AAD bridges the gap between the most recent advancements in speech processing technologies and speech prosthesis research, moving us closer to the development of cognitively controlled hearing aids.
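The chapter itself contains no code, but the decoding step it describes, comparing neural signals against each separated source, is commonly implemented in the AAD literature as linear stimulus reconstruction: a pre-trained decoder maps the neural recording to an estimate of the attended speech envelope, which is then correlated with the envelope of each candidate speaker. The sketch below illustrates steps (3) and (4) under that assumption; the separation of step (2) is treated as an external black box, and every function name, lag count, and gain value here is illustrative rather than taken from the chapter.

```python
# Hypothetical sketch of steps (3) and (4): correlation-based stimulus
# reconstruction followed by re-mixing. Assumes the neural data and the
# separated sources have already been resampled to a shared rate.

import numpy as np
from scipy.signal import hilbert


def envelope(audio: np.ndarray) -> np.ndarray:
    """Broadband amplitude envelope via the Hilbert transform."""
    return np.abs(hilbert(audio))


def reconstruct_envelope(neural: np.ndarray, decoder: np.ndarray,
                         n_lags: int) -> np.ndarray:
    """Map time-lagged neural data to an estimated speech envelope.

    neural  : (n_samples, n_channels) recording
    decoder : (n_lags * n_channels,) weights, fit beforehand on trials
              where the attended speaker was known
    """
    n_samples, n_channels = neural.shape
    X = np.zeros((n_samples, n_lags * n_channels))
    for lag in range(n_lags):
        # Shift the recording by `lag` samples (zero-padded at the start).
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = \
            neural[:n_samples - lag]
    return X @ decoder


def decode_attention(neural, sources, decoder, n_lags=16):
    """Pick the separated source whose envelope best matches the
    envelope reconstructed from the neural signals."""
    recon = reconstruct_envelope(neural, decoder, n_lags)
    n = min(len(recon), *(len(s) for s in sources))
    scores = [np.corrcoef(recon[:n], envelope(s)[:n])[0, 1]
              for s in sources]
    return int(np.argmax(scores)), scores


def amplify_attended(sources, attended_idx, gain_db=9.0):
    """Re-mix the separated sources with the attended speaker boosted."""
    gain = 10.0 ** (gain_db / 20.0)
    mix = sum(gain * s if i == attended_idx else s
              for i, s in enumerate(sources))
    return mix / np.max(np.abs(mix))  # peak-normalise the output
```

In an online system of the kind the abstract describes, these correlations would be computed over a sliding window so that a change in the winning source marks a switch in attention; the decoder itself would be trained per subject on data with a known attention target.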

Research supported by NIH, NIDCD, DC014279.

References

  1. J.E. Peelle, A. Wingfield, The neural consequences of age-related hearing loss, Trends Neurosci. (2016)

    Google Scholar 

  2. J.L. Clark, D.W. Swanepoel, Technology for hearing loss–as we know it, and as we dream it. Disab. Rehabil. Assist. Tech. 9, 408–413 (2014)

    Article  Google Scholar 

  3. N. Ding, J.Z. Simon, Emergence of neural encoding of auditory objects while listening to competing speakers. Proc. Natl. Acad. Sci. U.S.A. 109, 11854–11859 (2012)

    Article  Google Scholar 

  4. A.J. Power, J.J. Foxe, E.J. Forde, R.B. Reilly, E.C. Lalor, At what time is the cocktail party? A late locus of selective attention to natural speech. Eur. J. Neurosci. 35, 1497–1503 (2012)

    Article  Google Scholar 

  5. N. Mesgarani, E.F. Chang, Selective cortical representation of attended speaker in multi-talker speech perception. Nature 485, 233-U118 (2012)

    Article  Google Scholar 

  6. J.A. O’Sullivan, A.J. Power, N. Mesgarani, S. Rajaram, J.J. Foxe, B.G. Shinn-Cunningham et al., Attentional selection in a cocktail party environment can be decoded from single-trial EEG. Cerebral Cortex 25, 1697–1706 (2015)

    Article  Google Scholar 

  7. S. Van Eyndhoven, T. Francart, A. Bertrand, EEG-informed attended speaker extraction from recorded speech mixtures with application in neuro-steered hearing prostheses. arXiv preprint arXiv:1602.05702 (2016)

  8. B. Mirkovic, S. Debener, M. Jaeger, M. De Vos, Decoding the attended speech stream with multi-channel EEG: implications for online, daily-life applications. J. Neural Eng. 12, 046007 (2015)

    Article  Google Scholar 

  9. M.G. Bleichner, B. Mirkovic, S. Debener, Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison. J. Neural Eng. 13, 066004 (2016)

    Article  Google Scholar 

  10. N. Das, S. Van Eyndhoven, T. Francart, A. Bertrand, Adaptive attention-driven speech enhancement for EEG-informed hearing prostheses, in 2016 IEEE 38th Annual International Conference of the Engineering in Medicine and Biology Society (EMBC) (2016) pp. 77–80

    Google Scholar 

  11. F. Weninger, J.R. Hershey, J. Le Roux, B. Schuller, Discriminatively trained recurrent neural networks for single-channel speech separation, in IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 577–581 (2014)

    Google Scholar 

  12. http://naplab.ee.columbia.edu/nnaad.html

  13. J. Li, L. Deng, Y. Gong, R. Haeb-Umbach, An overview of noise-robust automatic speech recognition. IEEE/ACM Trans. Audio, Speech, Lang. Process. 22, 745–777 (2014)

    Article  Google Scholar 

Download references

Author information

Corresponding author

Correspondence to James O'Sullivan.

Copyright information

© 2020 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

O’Sullivan, J. et al. (2020). Neural Decoding of Attentional Selection in Multi-speaker Environments Without Access to Clean Sources. In: Guger, C., Allison, B.Z., Miller, K. (eds) Brain–Computer Interface Research. SpringerBriefs in Electrical and Computer Engineering. Springer, Cham. https://doi.org/10.1007/978-3-030-49583-1_6

  • DOI: https://doi.org/10.1007/978-3-030-49583-1_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49582-4

  • Online ISBN: 978-3-030-49583-1

  • eBook Packages: Computer Science, Computer Science (R0)
