Lip Detection Using Confidence-Based Adaptive Thresholding

  • Jin Young Kim
  • Seung You Na
  • Ronald Cole
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4291)

Abstract

In this paper we propose a lip detector based on adaptive thresholding of hue-transformed face images. The threshold is adapted according to the confidence values of the estimated lip regions, where the confidence of a lip region measures how similar the detected region is to a true lip. We construct simple fuzzy rules for this confidence from true-lip statistics of center position, width, and height. The threshold value is then adjusted so that the confidence of the renewed lip region is maximized. Lip detection experiments on the VidTimit database demonstrate the performance improvement of the proposed method.
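For illustration, a minimal sketch of such a confidence-driven adaptation loop is given below (Python). It assumes a hypothetical stats dictionary of (mean, spread) pairs for the true-lip center, width, and height, Gaussian-style fuzzy memberships, and that lip pixels fall below the hue threshold; these details are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def membership(value, mean, spread):
        # Gaussian-style fuzzy membership (assumption; the paper builds its
        # membership functions from true-lip statistics).
        return np.exp(-0.5 * ((value - mean) / spread) ** 2)

    def lip_confidence(mask, stats):
        # Confidence: how well the candidate region's center position, width,
        # and height match the true-lip statistics (fuzzy rules combined by product).
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return 0.0
        cx, cy = xs.mean(), ys.mean()
        w, h = xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
        return (membership(cx, *stats["cx"]) * membership(cy, *stats["cy"]) *
                membership(w, *stats["w"]) * membership(h, *stats["h"]))

    def adaptive_lip_threshold(hue, stats, candidates=np.linspace(0.02, 0.30, 15)):
        # Sweep candidate hue thresholds and keep the binary region whose
        # confidence is highest, i.e. adapt the threshold to each face image.
        best = (None, -1.0, None)
        for t in candidates:
            mask = hue < t  # assumption: lip pixels have low hue values
            conf = lip_confidence(mask, stats)
            if conf > best[1]:
                best = (t, conf, mask)
        return best  # (threshold, confidence, lip mask)

In practice the search would be restricted to a mouth region of the face image and to the largest connected component of the binary mask, but the loop above captures the core idea of maximizing lip confidence over candidate threshold values.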

Keywords

Membership Function · Binary Image · Face Image · Speaker Recognition · Speaker Identification

References

  1. Peter, C., Zelensky, M.: Using of Lip-Reading for Speech Recognition in Noisy Environments. In: Proc. of the 13th Czech-German Workshop on Speech Processing (2004)
  2. Gowdy, J.N., Subramanya, A., Bartels, C., Bilmes, J.: DBN Based Multi-stream Models for Audio-visual Speech Recognition. In: Proc. of ICASSP 2004, vol. 1, pp. 17–21 (2004)
  3. Fu, T., Liu, X.X., Liang, L.H., Pi, X., Nefian, A.V.: Audio-visual Speaker Identification Using Coupled Hidden Markov Models. In: Proc. of ICIP 2003, vol. 3, pp. 14–17 (2003)
  4. Caplier, A.: Lip Detection and Tracking. In: Proc. of the 11th International Conference on Image Analysis and Processing, pp. 8–13 (2001)
  5. Chetty, G., Wagner, M.: Automated Lip Feature Extraction. In: Proc. of Image and Vision Computing, pp. 17–22 (2004)
  6. Zhang, J.M., Tao, H., Wang, L.M., Zhan, Y.Z.: A Real-time Approach to the Lip-motion Extraction in Video Sequence. In: Proc. of the 2004 IEEE International Conference on Systems, Man and Cybernetics, pp. 6423–6428 (2004)
  7. Luthon, F., Leivin, M.: Lip Motion Automatic Detection. In: Proc. of the Scandinavian Conference on Image Analysis, vol. 1, pp. 253–260 (1997)
  8. Jacek, C.: Using Aerial and Geometric Features in Automatic Lip-reading. In: Proc. of Eurospeech 2002, vol. 4, pp. 2463–2466 (2002)
  9. Kim, J.Y., Song, M.G., Na, S.Y., Baek, S.J., Choi, S.H., Lee, J.H.: Skin-Color Based Human Tracking Using a Probabilistic Noise Model Combined with Neural Network. In: Wang, J., Yi, Z., Żurada, J.M., Lu, B.-L., Yin, H. (eds.) ISNN 2006. LNCS, vol. 3972, pp. 419–428. Springer, Heidelberg (2006)
  10. Sanderson, C.: VidTimit Database Documentation, http://users.rsise.anu.edu.au/~conrad/vidtimit/
  11. Fasel, I., Fortenberry, B., Movellan, J.: A Generative Framework for Real Time Object Detection and Classification. Computer Vision and Image Understanding 98, 182–210 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jin Young Kim (1)
  • Seung You Na (1)
  • Ronald Cole (2)
  1. Dept. of Electronics and Computer Eng., Chonnam National University, Gwangju, South Korea
  2. CSLR, University of Colorado at Boulder, Boulder, USA
