Abstract
Imagined speech is a process in which a person imagines the sound of words without moving any muscles to actually say them. If the brain signals of a person imagining speech can be used to recognize the words intended to be spoken, this could be a major step towards enabling people with physical disabilities, such as locked-in syndrome, to communicate effectively with others. It could also prove useful in situations where visual or audible communication is undesirable, for instance in military settings. Recent advances in technologies and devices for capturing brain signals, particularly the electroencephalogram (EEG), have made research into recognizing imagined speech possible. While the field is still in its early years, published studies have shown promising results. Current approaches to recognizing imagined speech can generally be divided into two categories: syllable-based and word-based. In this paper, we propose a simple word-based approach using Mel Frequency Cepstral Coefficients (MFCC) and the k-Nearest Neighbor (k-NN) classifier to recognize two simple words from EEG signals. Despite its simplicity, the results obtained show improvements over other studies based on dry-electrode EEG devices.
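The pipeline named in the abstract (MFCC feature extraction followed by k-NN classification) can be sketched roughly as follows. This is a minimal illustration on synthetic single-channel signals, not the paper's actual method: the sampling rate, filterbank size, number of coefficients, single-frame simplification, and distance metric are all illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.spatial.distance import cdist

def mfcc(signal, fs=256, n_fft=256, n_mels=20, n_coeffs=12):
    """Compute MFCCs of one signal frame (simplified: no windowing, no deltas)."""
    spectrum = np.abs(np.fft.rfft(signal, n_fft)) ** 2       # power spectrum
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    # Triangular mel-spaced filterbank between 0 Hz and fs/2
    mel_pts = np.linspace(0, 2595 * np.log10(1 + fs / 2 / 700), n_mels + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)
    fbank = np.zeros((n_mels, len(freqs)))
    for m in range(1, n_mels + 1):
        left, center, right = hz_pts[m - 1], hz_pts[m], hz_pts[m + 1]
        up = (freqs - left) / (center - left)
        down = (right - freqs) / (right - center)
        fbank[m - 1] = np.clip(np.minimum(up, down), 0, None)
    log_energy = np.log(fbank @ spectrum + 1e-10)            # log mel energies
    return dct(log_energy, norm="ortho")[:n_coeffs]          # decorrelate via DCT

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training feature vectors (Euclidean)."""
    d = cdist(train_X, x[None])[:, 0]
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Toy demo: two "word" classes simulated as 8 Hz vs 20 Hz oscillations + noise.
rng = np.random.default_rng(0)
t = np.arange(256) / 256
make = lambda f: np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(256)
X = np.array([mfcc(make(f)) for f in [8] * 10 + [20] * 10])
y = np.array([0] * 10 + [1] * 10)
pred = knn_predict(X, y, mfcc(make(8)))   # classify a fresh 8 Hz trial
```

Real EEG trials are multi-channel and non-stationary, so in practice features would be extracted per channel and per frame and concatenated before classification.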
Acknowledgement
The authors wish to thank the MOHE Fundamental Research Grant Scheme (FRGS), grant no. FRGS/2/2014/ICT07/MMU/03/1, obtained through Multimedia University, for supporting this research.
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this paper
Hashim, N., Ali, A., Mohd-Isa, WN. (2018). Word-Based Classification of Imagined Speech Using EEG. In: Alfred, R., Iida, H., Ag. Ibrahim, A., Lim, Y. (eds) Computational Science and Technology. ICCST 2017. Lecture Notes in Electrical Engineering, vol 488. Springer, Singapore. https://doi.org/10.1007/978-981-10-8276-4_19
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-8275-7
Online ISBN: 978-981-10-8276-4
eBook Packages: Engineering (R0)