Toward EEG Sensing of Imagined Speech

  • Michael D’Zmura
  • Siyi Deng
  • Tom Lappas
  • Samuel Thorpe
  • Ramesh Srinivasan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5610)


Abstract

Might EEG recorded while a person imagines words or sentences carry enough information to identify what is being thought? Analysis of EEG data from an experiment in which two syllables were spoken in imagination, in one of three rhythms, shows that relevant information is present in the EEG alpha, beta and theta bands. Envelope signals are used to compute filters matched to each experimental condition; applying these filters to the data from a single trial identifies the condition used on that trial with performance appreciably above chance. Informative spectral features within these bands motivate our current work with EEG spectrograms.
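The envelope-based matched-filter scheme can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact pipeline: the data here are synthetic, only the alpha band is used, and the sampling rate, trial length, modulation rates and leave-one-out correlation scoring are all assumptions made for the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Synthetic stand-in for the experiment: three "rhythm" conditions, each
# imposing a different amplitude modulation on an alpha-band carrier.
rng = np.random.default_rng(0)
fs = 256                                  # sampling rate (Hz), assumed
n_trials, n_samples = 60, fs * 2
conditions = rng.integers(0, 3, n_trials) # one of three imagined rhythms
t = np.arange(n_samples) / fs
rates = [1.0, 1.5, 2.0]                   # envelope modulation rates (Hz), assumed
trials = np.stack([
    (1 + 0.5 * np.sin(2 * np.pi * rates[c] * t)) * np.sin(2 * np.pi * 10 * t)
    + 0.5 * rng.standard_normal(n_samples)
    for c in conditions
])

def alpha_envelope(x):
    """Band-pass to the alpha band (8-12 Hz), then take the Hilbert envelope."""
    b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

env = np.array([alpha_envelope(x) for x in trials])

# Leave-one-out classification: the matched filter for a condition is the
# mean envelope of that condition's remaining trials; a held-out trial is
# assigned to the condition whose template it correlates with most strongly.
correct = 0
for i in range(n_trials):
    mask = np.arange(n_trials) != i
    templates = [env[mask & (conditions == c)].mean(axis=0) for c in range(3)]
    scores = [np.corrcoef(env[i], tpl)[0, 1] for tpl in templates]
    correct += int(np.argmax(scores) == conditions[i])

accuracy = correct / n_trials
print(f"classification accuracy: {accuracy:.2f}")  # chance level is ~0.33
```

With three equiprobable conditions, chance performance is about 33%; any accuracy reliably above that indicates the envelopes carry condition information, which is the paper's central claim.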


Keywords: EEG, imagined speech, covert speech, classification





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Michael D’Zmura (1)
  • Siyi Deng (1)
  • Tom Lappas (1)
  • Samuel Thorpe (1)
  • Ramesh Srinivasan (1)

  1. Department of Cognitive Sciences, UC Irvine, SSPB, Irvine, USA
