Horizontal Human Face Pose Determination Using Pupils and Skin Region Positions

  • Shahrel A. Suandi
  • Tie Sing Tai
  • Shuichi Enokida
  • Toshiaki Ejima
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4872)

Abstract

This paper describes a novel real-time technique for determining horizontal human face pose from a color video sequence. The idea underlying this technique is that when the head is turned to the right or left by an arbitrary amount, there is a significant relationship between the distance from the center of the two pupils to the head center and the distance between the two pupils. From these distances we compute a ratio called the "horizontal ratio". Besides reducing the dependency on facial feature tracking accuracy and being robust to noise, this ratio is the quantity used to determine the horizontal face pose. The technique is simple, computationally cheap, and requires only information that is usually retrievable from a face and facial feature tracker.
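
For illustration only, the sketch below shows one plausible way such a ratio could be computed from tracked pupil positions and a skin-region (head) center. The abstract does not give the exact formula, so the function name, the use of the pupil midpoint, and the sign convention are assumptions, not the paper's definition.

    # Hypothetical sketch of a "horizontal ratio": the horizontal offset of the
    # pupil midpoint from the head (skin-region) center, normalized by the
    # inter-pupil distance so it is less sensitive to scale and tracking noise.

    def horizontal_ratio(left_pupil, right_pupil, head_center):
        """left_pupil, right_pupil, head_center: (x, y) image coordinates."""
        mid_x = (left_pupil[0] + right_pupil[0]) / 2.0
        inter_pupil = ((left_pupil[0] - right_pupil[0]) ** 2 +
                       (left_pupil[1] - right_pupil[1]) ** 2) ** 0.5
        if inter_pupil == 0:
            raise ValueError("coincident pupil positions")
        # Signed offset: positive when the pupils lie to the right of the head center.
        return (mid_x - head_center[0]) / inter_pupil

    # Example: pupil midpoint 10 px left of the head center, 40 px apart -> -0.25
    print(horizontal_ratio((110, 120), (150, 120), (140, 125)))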

Keywords

Multiple view image processing, tracking and motion, face pose, horizontal ratio, eyes and skin region

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Shahrel A. Suandi (1, 2)
  • Tie Sing Tai (1)
  • Shuichi Enokida (2)
  • Toshiaki Ejima (2)
  1. School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Engineering Campus, 14300 Nibong Tebal, Pulau Pinang, Malaysia
  2. Intelligence Media Laboratory, Department of Artificial Intelligence, Kyushu Institute of Technology, Kawazu 680-4, Iizuka City, Fukuoka Pref., 820-8502, Japan
