
Welfare Interface Using Multiple Facial Features Tracking

  • Yunhee Shin
  • Eun Yi Kim
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4304)

Abstract

We propose a welfare interface using multiple facial features tracking, which can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component (CC) analysis. Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and accurately tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement and clicking are implemented. To assess the validity of the proposed system, it was applied to an interface system for a web browser and tested on a group of 25 users. The results show that our system achieves an accuracy of 99% and processes more than 12 frames/sec on a PC for 320×240 input images; as such, it can provide user-friendly and convenient access to a computer in real time.
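To make the pipeline concrete, the sketch below illustrates the detection and tracking steps described in the abstract using Python and OpenCV. It is a minimal illustration under stated assumptions, not the authors' implementation: the skin-color thresholds and histogram parameters are assumed values, the paper's NN-based eye texture classifier and edge-based mouth detector are not reproduced (the initial eye and mouth regions are taken as given), and the function names are hypothetical.

    # Hypothetical sketch of the abstract's pipeline (not the authors' code).
    # Skin-color thresholds and histogram parameters below are assumptions.
    import cv2
    import numpy as np

    def detect_face(frame_bgr):
        """Skin-color segmentation followed by connected-component analysis."""
        ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
        # Assumed Cr/Cb skin range; the paper's exact model is not in the abstract.
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        if n < 2:
            return None
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background label 0
        x, y, w, h = stats[largest, :4]
        return (x, y, w, h)

    def make_backprojection(frame_bgr, eye_box):
        """Hue-histogram back-projection used as the mean-shift probability image."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        x, y, w, h = eye_box
        hist = cv2.calcHist([hsv[y:y+h, x:x+w]], [0], None, [32], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        return cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

    def track_eye(prob_image, window):
        """Shift the eye search window with the mean-shift algorithm."""
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        _, window = cv2.meanShift(prob_image, window, criteria)
        return window

    def track_mouth(frame_gray, template, search_window):
        """Relocate the mouth by template matching inside a search window."""
        x, y, w, h = search_window
        scores = cv2.matchTemplate(frame_gray[y:y+h, x:x+w], template,
                                   cv2.TM_CCOEFF_NORMED)
        _, _, _, best = cv2.minMaxLoc(scores)
        th, tw = template.shape
        return (x + best[0], y + best[1], tw, th)

One plausible use of the tracking output, consistent with the abstract, is to map the frame-to-frame displacement of the tracked eye window to cursor movement and a change in the matched mouth region to a click event.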

Keywords

Facial Feature · Face Detection · Facial Region · Search Window · Mouth Region



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yunhee Shin¹
  • Eun Yi Kim¹
  1. Department of Internet and Multimedia Engineering, Konkuk University, Korea
