Feature Extraction and Selection for Inferring User Engagement in an HCI Environment

  • Stylianos Asteriadis
  • Kostas Karpouzis
  • Stefanos Kollias
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5610)


In this paper, we present our work towards estimating a person's engagement with the information displayed on a computer monitor. Deciding whether a user is attentive or not, and frustrated or not, helps adapt the displayed information in special environments, such as e-learning. The aim of the current work is the development of a method that works user-independently, requires no special lighting conditions, and has minimal hardware requirements: a computer and a web camera.
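To illustrate how features such as head pose and eye gaze deviation could be fused into a single engagement estimate, the sketch below implements a zero-order Takagi–Sugeno fuzzy inference step in the spirit of the fuzzy-modelling approach the paper builds on. This is a minimal illustrative example, not the authors' actual system: the two rules, the Gaussian membership parameters, and the normalised inputs `head_dev` and `gaze_dev` (deviation from the screen, scaled to [0, 1]) are assumptions made for the sketch.

```python
import math


def gauss(x, c, sigma):
    """Gaussian membership function centred at c with width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))


def engagement_score(head_dev, gaze_dev):
    """Zero-order Takagi-Sugeno inference with two illustrative rules:
    R1: if head_dev is SMALL and gaze_dev is SMALL -> engaged (1.0)
    R2: if head_dev is LARGE or  gaze_dev is LARGE -> not engaged (0.0)
    Inputs are assumed normalised deviations in [0, 1] (hypothetical scaling).
    """
    small_h = gauss(head_dev, 0.0, 0.3)
    small_g = gauss(gaze_dev, 0.0, 0.3)
    large_h = gauss(head_dev, 1.0, 0.3)
    large_g = gauss(gaze_dev, 1.0, 0.3)

    w1 = small_h * small_g      # product t-norm for the AND in rule R1
    w2 = max(large_h, large_g)  # max s-norm for the OR in rule R2

    # Weighted average of the constant rule consequents (1.0 and 0.0).
    return (w1 * 1.0 + w2 * 0.0) / (w1 + w2)


print(engagement_score(0.1, 0.1))  # near 1: user facing the screen
print(engagement_score(0.9, 0.8))  # near 0: user looking away
```

In an adaptive (e.g. ANFIS-style) setting, the membership parameters and consequents would be fitted to annotated training data rather than fixed by hand as here.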


Keywords: User engagement · Head pose · Eye gaze · Facial feature tracking





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Stylianos Asteriadis¹
  • Kostas Karpouzis¹
  • Stefanos Kollias¹

  1. School of Electrical and Computer Engineering, Image, Video and Multimedia Systems Laboratory, National Technical University of Athens, Zographou, Greece
