Telecommunication Systems, Volume 52, Issue 3, pp 1479–1489

Implementation of a robust absolute virtual head mouse combining face detection, template matching and optical flow algorithms

  • T. Pallejà
  • A. Guillamet
  • M. Tresanchez
  • M. Teixidó
  • A. F. del Viso
  • C. Rebate
  • J. Palacín

Abstract

This work presents the implementation of a robust absolute virtual head mouse based on the interpretation of head movements and face gestures captured with a frontal camera. The procedure combines face detection, template matching and optical flow algorithms to emulate all mouse events. The virtual device is designed specifically as a non-contact pointer alternative for people with mobility impairments in the upper extremities. The implementation was compared with a standard mouse, a touchpad and a joystick. Validation results show motion performance comparable to that of a standard mouse and better than that of a joystick, as well as good performance in detecting the face gestures used to generate click events: a 96% success rate for opening the mouth and 68% for voluntary eye blinks.
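The paper's exact pipeline is not reproduced here, but the two tracking stages it names can be illustrated in isolation. The sketch below is a minimal NumPy rendition of (a) exhaustive normalized cross-correlation template matching, which yields the absolute position of a tracked facial patch, and (b) a single global Lucas-Kanade optical-flow step, which refines small sub-pixel motion between frames. The synthetic frames, function names, and parameters are illustrative assumptions; the paper itself additionally uses a face detector (e.g. the cited Viola-Jones cascade) to initialize the template, which is omitted here.

```python
import numpy as np

def ncc_match(image, template):
    """Exhaustive template matching by normalized cross-correlation (NCC).
    Returns the (row, col) of the best-matching top-left window corner."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t * t).sum())
    best_score, best_pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * tnorm
            if denom == 0.0:
                continue  # flat window: NCC undefined, skip it
            score = float((wz * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

def lucas_kanade(prev, curr):
    """One global Lucas-Kanade step: least-squares solve of the brightness-
    constancy constraint Ix*dx + Iy*dy + It = 0 for a single (dx, dy).
    Valid only for small (sub-pixel to few-pixel) displacements."""
    Ix = np.gradient(prev, axis=1)  # horizontal intensity gradient
    Iy = np.gradient(prev, axis=0)  # vertical intensity gradient
    It = curr - prev                # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return flow  # (dx, dy); positive = rightward / downward motion

# -- Template matching: recover an integer displacement of a tracked patch --
rng = np.random.default_rng(0)
frame0 = rng.random((40, 40))
template = frame0[10:18, 12:20].copy()                   # patch grabbed in frame 0
frame1 = np.roll(np.roll(frame0, 3, axis=0), 2, axis=1)  # scene shifts by (3, 2)
print(ncc_match(frame1, template))  # → (13, 14): the patch moved by (3, 2)

# -- Optical flow: recover a small sub-pixel displacement --
y, x = np.mgrid[0:40, 0:40].astype(float)
prev = np.sin(x / 6.0) + np.cos(y / 8.0)
curr = np.sin((x - 0.5) / 6.0) + np.cos((y - 0.3) / 8.0)  # shifted by (0.5, 0.3)
dx, dy = lucas_kanade(prev, curr)
print(round(dx, 2), round(dy, 2))
```

The division of labour mirrors the abstract's "absolute" claim: template matching re-localizes the patch from scratch each frame (so errors do not accumulate as in a purely relative optical-flow mouse), while the optical-flow step supplies the fine motion between integer-pixel matches.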

Keywords

Virtual mouse · Face detection · Template matching · Optical flow



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • T. Pallejà (1)
  • A. Guillamet (1)
  • M. Tresanchez (1)
  • M. Teixidó (1)
  • A. F. del Viso (2)
  • C. Rebate (2)
  • J. Palacín (1)

  1. Department of Computer Science and Industrial Engineering, University of Lleida, Lleida, Spain
  2. eInclusion Unit, Indra, Madrid, Spain
