An Auditory Output Brain–Computer Interface for Speech Communication

  • Jonathan S. Brumberg
  • Frank H. Guenther
  • Philip R. Kennedy
Part of the SpringerBriefs in Electrical and Computer Engineering book series


Understanding the neural mechanisms underlying speech production can aid the design and implementation of brain–computer interfaces (BCIs) for speech communication. Speech production is unequivocally a motor behavior: speech arises from the precisely coordinated activation of the muscles of the respiratory and vocal mechanisms. Speech relies primarily on auditory output to convey information between conversation partners, and self-perception of one's own speech is also important for maintaining error-free speech and proper production of intended utterances. This chapter discusses our efforts to use motor cortical neural output during attempted speech production to control a communication BCI operated by an individual with locked-in syndrome, while taking advantage of the neural circuits used for learning and maintaining speech. The end result is a BCI capable of producing instantaneously vocalized output within a framework of motor-based brain–computer interfacing that provides appropriate auditory feedback to the user.
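The control scheme described above — decoding motor cortical activity into acoustic parameters that drive a synthesizer with near-instantaneous auditory feedback — can be illustrated with a minimal simulation. This is an illustrative sketch, not the chapter's published implementation: the linear decoder weights, the smoothing constant, the formant ranges, and the Poisson-simulated firing rates are all assumptions chosen for demonstration.

```python
import numpy as np

# Hypothetical linear decoder: maps a vector of unit firing rates to two
# formant frequencies (F1, F2), a minimal acoustic parameterization for
# driving a real-time vowel synthesizer. Weights are illustrative only.
rng = np.random.default_rng(0)
N_UNITS = 16
W = rng.normal(scale=5.0, size=(2, N_UNITS))  # decoding weights (Hz per spike/s)
b = np.array([500.0, 1500.0])                 # baseline formants (Hz), neutral vowel

def decode_formants(rates, prev, alpha=0.8):
    """Decode (F1, F2) from firing rates, with exponential smoothing to
    stabilize the output trajectory, and clamp to a plausible vowel space."""
    raw = W @ rates + b
    smoothed = alpha * prev + (1.0 - alpha) * raw
    return np.clip(smoothed, [200.0, 800.0], [900.0, 2500.0])

# Simulate a short run: in a real-time system each decoded frame would be
# sent to the synthesizer within tens of milliseconds, so the user hears
# the acoustic consequences of their neural activity almost immediately.
formants = b.copy()
trajectory = []
for _ in range(50):
    rates = rng.poisson(10.0, N_UNITS).astype(float)  # simulated spike rates
    formants = decode_formants(rates, formants)
    trajectory.append(formants.copy())
```

The smoothing step stands in for the low-pass filtering commonly used in continuous BCI control; tight closed-loop latency is what lets the user exploit natural auditory feedback circuits to correct errors during production.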



Supported in part by CELEST, a National Science Foundation Science of Learning Center (NSF SMA-0835976), and the National Institutes of Health (R03 DC011304, R44 DC007050-02).



Copyright information

© The Author(s) 2013

Authors and Affiliations

  • Jonathan S. Brumberg (1)
  • Frank H. Guenther (2)
  • Philip R. Kennedy (3)

  1. Department of Speech-Language-Hearing, University of Kansas, Lawrence, USA
  2. Department of Speech, Language and Hearing Sciences and Department of Biomedical Engineering, Boston University, Boston, USA
  3. Neural Signals, Inc., Duluth, USA
