Recognition of Human Voice Utterances from Facial Surface EMG without Using Audio Signals
This research evaluates facial surface electromyography (fSEMG) for recognizing speech utterances in English and German. The raw sampling is performed without sensing any audio signal, and the system is designed for voice-command-based Human Computer Interaction (HCI). An effective technique is presented that exploits the activity of the facial articulatory muscles and human factors for silent vowel recognition. The muscle signals are reduced to activity parameters by temporal integration, and the matching is performed by an artificial back-propagation neural network that must be trained for each individual human user. The experiments investigated different speaking styles and speeds as well as different languages. Cross-validation was used to convert a limited set of single-shot experiments into a broader statistical reliability test of the classification method. The experimental results show that this technique yields high recognition rates for all participants in both languages. They also show that the system is easy for a human user to train, which suggests that the described recognition approach can work reliably for simple vowel-based commands in HCI, both for users who speak one or more languages and for people who suffer from certain speech disabilities.
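The reduction of raw muscle signals to activity parameters by temporal integration can be illustrated with a short sketch. This is not the authors' implementation; it assumes windowed root mean square (RMS) as the temporal-integration measure, a common choice in surface EMG processing and consistent with the keyword list. The window size and sample values are illustrative.

```python
import math

def rms_features(samples, window_size):
    """Reduce a raw EMG sample stream to activity parameters by
    computing the root mean square over non-overlapping windows,
    a simple form of temporal integration."""
    features = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        window = samples[start:start + window_size]
        features.append(math.sqrt(sum(x * x for x in window) / window_size))
    return features

# A quiet baseline segment followed by a burst of muscle activity:
# the active window yields a clearly larger RMS value.
quiet = [0.01, -0.02, 0.015, -0.01]
active = [0.8, -0.9, 0.85, -0.75]
feats = rms_features(quiet + active, 4)
```

Feature vectors of this kind, one value per electrode channel and window, would then serve as inputs to the per-user back-propagation classifier described above.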
Keywords: Root Mean Square, Speech Recognition, Human Computer Interaction, Audio Signal, Hand Gesture