A Haptic-Based Application for Active Exploration of Facial Expressions by the Visually Impaired

  • Shamima Yasmin
  • Troy McDaniel
  • Sethuraman Panchanathan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8888)


In this paper, a haptic-based interpretation of video images acquired directly by a webcam is introduced to help individuals who are visually impaired dynamically explore their own faces in an immersive haptic environment. Haptic interaction involves perceiving the movement of different facial features through geometric cues in a 3D environment. Spatio-temporal variation of features due to facial movement helps individuals who are visually impaired understand different facial expressions of emotion through active exploration of their own facial features. A dynamic haptic interface appropriate for this kind of interaction is discussed. Lastly, an application of this kind of training to communication among individuals who are deafblind is proposed.
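The paper's own tracking pipeline is not reproduced on this page, but the core idea of spatio-temporal variation of feature points relative to an anchor point can be sketched as follows. This is a minimal illustration with hypothetical landmark names and coordinates (in a real system these would come from a face tracker operating on the webcam stream); it is not the authors' implementation.

```python
import math

# Hypothetical 2D landmark positions (pixels) for two video frames.
# Names and coordinates are illustrative only.
NEUTRAL = {"nose_tip": (320, 240), "mouth_left": (280, 300), "mouth_right": (360, 300)}
SMILE   = {"nose_tip": (320, 240), "mouth_left": (270, 290), "mouth_right": (370, 290)}

def displacements(prev, curr, anchor="nose_tip"):
    """Per-feature motion between frames, measured relative to an anchor
    point so that whole-head translation is not mistaken for an
    expression change. Returns {feature: (dx, dy, magnitude)}."""
    ax0, ay0 = prev[anchor]
    ax1, ay1 = curr[anchor]
    out = {}
    for name in prev:
        if name == anchor:
            continue
        # Displacement of the feature in anchor-relative coordinates.
        dx = (curr[name][0] - ax1) - (prev[name][0] - ax0)
        dy = (curr[name][1] - ay1) - (prev[name][1] - ay0)
        out[name] = (dx, dy, math.hypot(dx, dy))
    return out

# The resulting vectors could drive geometric cues rendered haptically,
# e.g. raised ridges whose position follows the mouth corners.
motion = displacements(NEUTRAL, SMILE)
```

Here the mouth corners move outward and upward (negative dy in image coordinates), a geometric signature of a smile that a haptic rendering could convey as changing surface geometry under the exploring finger.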


Keywords: Facial Expression, Feature Point, Facial Feature, Anchor Point, Active Exploration





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Shamima Yasmin (1)
  • Troy McDaniel (1)
  • Sethuraman Panchanathan (1)

  1. Center for Cognitive Ubiquitous Computing, School of Computing, Informatics and Decision Systems Engineering, Arizona State University, Tempe, USA
