Circuits, Systems, and Signal Processing, Volume 34, Issue 4, pp 1279–1304

Mouth State Detection From Low-Frequency Ultrasonic Reflection

DOI: 10.1007/s00034-014-9904-4

Cite this article as:
McLoughlin, I.V. & Song, Y. Circuits Syst Signal Process (2015) 34: 1279. doi:10.1007/s00034-014-9904-4

Abstract

This paper develops, simulates and experimentally evaluates a novel non-contact method based on low-frequency (LF) ultrasound which determines, from airborne reflection, whether a subject's lips are open or closed. The method accurately distinguishes between open and closed lip states using a low-complexity detection algorithm, and is highly robust to interfering audible noise. A novel voice activity detector built on the proposed method is implemented and evaluated, and is shown to detect voice activity with high accuracy even in the presence of high levels of background noise. The lip state detector is evaluated at a number of angles of incidence to the mouth and under various background noise conditions. The underlying mouth state detection technique relies upon an inaudible LF ultrasonic excitation generated in front of the user's face: in the closed-mouth state the excitation reflects back from the face as a simple echo, whereas in the open-mouth state it resonates inside the mouth and vocal tract, altering the spectral response of the reflected wave. The difference between these echo and resonance behaviours is the basis for automated lip opening detection, i.e. determining whether the mouth is open or closed at the lips. Potential applications include voice generation prostheses for speech-impaired patients, hands-free control of electrolarynx and similar rehabilitation devices, silent speech interfaces, and speech authentication.
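The echo-versus-resonance distinction described above can be illustrated with a simple band-energy ratio: a closed mouth returns the ultrasonic carrier largely unchanged, while an open mouth spreads reflected energy into sidebands around the carrier. The following is a minimal sketch of that idea, not the authors' detection algorithm; the carrier frequency, bandwidths, threshold, and function name are all assumptions chosen for illustration.

```python
import numpy as np

def mouth_state(signal, fs, carrier=24000.0, band=2000.0, threshold=0.5):
    """Classify a reflected ultrasonic frame as 'open' or 'closed'.

    Compares energy in a narrow bin around the carrier against energy
    in the surrounding sideband region, where vocal-tract resonance
    would deposit energy when the mouth is open.
    """
    # Windowed magnitude spectrum of the reflected frame
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

    # Narrow region around the carrier (simple echo energy)
    carrier_mask = np.abs(freqs - carrier) < 100.0
    # Sideband region where resonance spreads energy
    side_mask = (np.abs(freqs - carrier) >= 100.0) & (np.abs(freqs - carrier) < band)

    carrier_energy = np.sum(spec[carrier_mask] ** 2)
    side_energy = np.sum(spec[side_mask] ** 2)

    ratio = side_energy / (carrier_energy + 1e-12)
    return "open" if ratio > threshold else "closed"
```

A closed-mouth reflection modelled as a pure 24 kHz echo yields a near-zero ratio, while adding resonant components near the carrier pushes the ratio above the threshold; in practice the threshold would need calibration against real reflections.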

Keywords

Lip state detection · Low-frequency ultrasound · Mouth state detection · Speech activity detection · Voice activity detection

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. National Engineering Laboratory of Speech and Language Information Processing, The University of Science & Technology of China, Hefei, China
