Frontiers of Electrical and Electronic Engineering in China, Volume 1, Issue 4, pp 425–430

Audio-visual voice activity detection

Research Article

DOI: 10.1007/s11460-006-0081-5

Cite this article as:
Liu, P. & Wang, Z. Front. Electr. Electron. Eng. China (2006) 1: 425. doi:10.1007/s11460-006-0081-5

Abstract

In speech signal processing systems, frame-energy based voice activity detection (VAD) can be degraded by background noise and by the non-stationary behavior of frame energy within voice segments. The purpose of this paper is to improve the performance and robustness of VAD by introducing visual information. A data-driven linear transformation is adopted for visual feature extraction, and a general statistical VAD model is designed. Using this general model and a two-stage fusion strategy presented in this paper, a concrete multimodal VAD system is built. Experiments show a 55.0% relative reduction in frame error rate and a 98.5% relative reduction in sentence-breaking error rate for multimodal VAD compared to frame-energy based audio-only VAD. These results show that the multimodal method almost eliminates sentence-breaking errors and clearly improves frame-level detection performance, demonstrating the effectiveness of the visual modality in VAD.
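To make the audio-only baseline concrete, the following is a minimal sketch of frame-energy based VAD, the method the paper improves upon. The frame length, hop size, and energy threshold here are illustrative choices, not values taken from the paper; in noisy conditions such a fixed threshold is exactly what causes the errors the abstract describes.

```python
import numpy as np

def frame_energy_vad(signal, frame_len=256, hop=128, threshold=0.01):
    """Label each frame as speech (True) when its mean energy exceeds a fixed threshold.

    This is the classic frame-energy baseline: robust only when the noise
    floor is well below the threshold and speech energy is well above it.
    """
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    decisions = []
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        energy = np.mean(frame ** 2)  # average energy per sample
        decisions.append(bool(energy > threshold))
    return decisions

# Example: 1024 samples of silence followed by a 440 Hz tone at 8 kHz
# standing in for a voiced segment.
sig = np.concatenate([
    np.zeros(1024),
    0.5 * np.sin(2 * np.pi * 440 * np.arange(1024) / 8000),
])
labels = frame_energy_vad(sig)
```

With these parameters the silent frames are labeled non-speech and the tone frames speech; adding broadband noise at energies near the threshold makes the decisions unstable, which motivates the paper's use of a statistical model plus visual features.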

Keywords

speech recognition, voice activity detection, multimodal

Copyright information

© Higher Education Press and Springer-Verlag 2006

Authors and Affiliations

  1. Department of Electronic Engineering, Tsinghua University, Beijing, China