Word-Based Classification of Imagined Speech Using EEG

  • Conference paper
  • First Online:
Computational Science and Technology (ICCST 2017)

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 488)

Abstract

Imagined speech is a process in which a person imagines the sound of words without moving any muscles to actually say them. If the brain signals of a person imagining speech can be used to recognize the words intended to be spoken, this could be a major step towards helping people with physical disabilities, such as locked-in syndrome, communicate effectively with others. It could also prove useful in situations where visual or audible communication is undesirable, for instance in military settings. Recent advances in technologies and devices for capturing brain signals, particularly the electroencephalogram (EEG), have made research into recognizing imagined speech possible. While this research is still in its early years, published studies have shown promising results. Current approaches to recognizing imagined speech can generally be divided into two categories: syllable-based and word-based. In this paper, we propose a simple word-based approach using Mel Frequency Cepstral Coefficients (MFCC) and k-Nearest Neighbor (k-NN) to recognize two simple words from EEG signals. Despite its simplicity, the results obtained show some improvement over other studies based on dry-electrode EEG devices.
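
The abstract describes the pipeline only at a high level. Below is a minimal sketch of that kind of pipeline (MFCC features fed to a k-NN classifier), not the authors' actual implementation: the sampling rate, window sizes, single-channel handling, train/test split, and the use of the librosa and scikit-learn libraries are all assumptions made for illustration. Averaging the MFCC frames over time gives one fixed-length feature vector per trial, which is what a distance-based classifier such as k-NN requires.

```python
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

FS = 128  # assumed sampling rate of the EEG device (Hz); not taken from the paper

def mfcc_features(trial, fs=FS, n_mfcc=12):
    """Extract MFCCs from a single-channel EEG trial and average them over time."""
    mfcc = librosa.feature.mfcc(
        y=trial.astype(np.float64), sr=fs,
        n_mfcc=n_mfcc, n_fft=128, hop_length=64, n_mels=32)
    return mfcc.mean(axis=1)  # one fixed-length feature vector per trial

def evaluate(trials, labels, k=3):
    """Train a k-NN classifier on MFCC features and report held-out accuracy."""
    X = np.array([mfcc_features(t) for t in trials])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, random_state=0, stratify=labels)
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    return knn.score(X_test, y_test)

if __name__ == "__main__":
    # Synthetic stand-in data: 40 two-second trials for two imagined words.
    rng = np.random.default_rng(0)
    trials = rng.standard_normal((40, 2 * FS))
    labels = np.array([0, 1] * 20)
    print("held-out accuracy:", evaluate(trials, labels))
```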

Acknowledgement

The authors wish to thank the MOHE Fundamental Research Grant Scheme (FRGS), grant no. FRGS/2/2014/ICT07/MMU/03/1, obtained through Multimedia University, for supporting this research.

Author information

Corresponding author

Correspondence to Noramiza Hashim.

Copyright information

© 2018 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Hashim, N., Ali, A., Mohd-Isa, WN. (2018). Word-Based Classification of Imagined Speech Using EEG. In: Alfred, R., Iida, H., Ag. Ibrahim, A., Lim, Y. (eds) Computational Science and Technology. ICCST 2017. Lecture Notes in Electrical Engineering, vol 488. Springer, Singapore. https://doi.org/10.1007/978-981-10-8276-4_19

  • DOI: https://doi.org/10.1007/978-981-10-8276-4_19

  • Published:

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-8275-7

  • Online ISBN: 978-981-10-8276-4

  • eBook Packages: Engineering, Engineering (R0)
