
Recognition of Human Voice Utterances from Facial Surface EMG without Using Audio Signals

  • Sridhar Poosapadi Arjunan
  • Hans Weghorn
  • Dinesh Kant Kumar
  • Ganesh Naik
  • Wai Chee Yau
Conference paper
Part of the Lecture Notes in Business Information Processing book series (LNBIP, volume 12)

Abstract

This research examines the evaluation of fSEMG (facial surface electromyogram) for recognizing speech utterances in English and German. The raw sampling is performed without sensing any audio signal, and the system is designed for Human Computer Interaction (HCI) based on voice commands. An effective technique is presented that exploits the activity of the articulatory facial muscles and human factors for silent vowel recognition. The muscle signals are reduced to activity parameters by temporal integration, and the matching is performed by an artificial back-propagation neural network that has to be trained for each individual human user. In the experiments, different speaking styles and speeds as well as different languages were investigated. Cross-validation was used to convert a limited set of single-shot experiments into a broader statistical reliability test of the classification method. The experimental results show that this technique yields high recognition rates for all participants in both languages. They also show that the system is easy to train for a human user, which suggests that the described recognition approach can work reliably for simple vowel-based commands in HCI, especially when the user speaks one or more languages, as well as for people who suffer from certain speech disabilities.
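The processing chain summarized above (temporal integration of the muscle signals into activity parameters, a back-propagation network trained per user, and cross-validation over the limited recordings) can be illustrated with a minimal sketch. The sketch below assumes windowed root-mean-square values as the temporal-integration feature and uses scikit-learn's MLPClassifier as a stand-in back-propagation network; the channel count, window length, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def rms_features(emg, window=500):
    """Reduce a (channels x samples) fSEMG recording to one averaged
    RMS activity value per channel over non-overlapping windows."""
    n_ch, n_samp = emg.shape
    n_win = n_samp // window
    segments = emg[:, :n_win * window].reshape(n_ch, n_win, window)
    rms = np.sqrt((segments ** 2).mean(axis=2))  # (channels, windows)
    return rms.mean(axis=1)                      # one activity parameter per channel

# Synthetic stand-in data: 50 utterances, 4 facial-muscle channels,
# 5000 samples each, 5 vowel classes (purely for illustration).
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=50)
X = np.array([rms_features(rng.normal(scale=1.0 + y, size=(4, 5000)))
              for y in labels])

# Back-propagation network trained per user, evaluated with k-fold
# cross-validation to turn the limited recordings into a reliability test.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print("mean cross-validated accuracy: %.2f" % scores.mean())
```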

Keywords

Root Mean Square, Speech Recognition, Human Computer Interaction, Audio Signal, Hand Gesture



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Sridhar Poosapadi Arjunan 1, 2
  • Hans Weghorn 1
  • Dinesh Kant Kumar 2
  • Ganesh Naik 1, 2
  • Wai Chee Yau 1, 2
  1. BA-University of Cooperative Education, Stuttgart, Germany
  2. School of Electrical and Computer Engineering, RMIT University, Melbourne, Australia
