
Generalization of a Vision-Based Computational Model of Mind-Reading

  • Rana el Kaliouby
  • Peter Robinson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3784)

Abstract

This paper describes a vision-based computational model of mind-reading that infers complex mental states from head and facial expressions in real time. The system's ability to generalize is evaluated on videos of six mental states (agreeing, concentrating, disagreeing, interested, thinking, and unsure) posed by lay people in a relatively uncontrolled recording environment. The results show that the system's accuracy is comparable to that of humans on the same corpus.
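As a rough illustration of the kind of inference involved, the sketch below performs forward filtering in a toy dynamic Bayesian network: per-frame facial-display observations are folded into a posterior over the six mental states named above. The display vocabulary, the two-level structure, and all probabilities are illustrative assumptions for this sketch only, not the authors' model.

# Minimal sketch (illustrative, not the authors' implementation): infer a
# complex mental state from a stream of per-frame facial-display labels by
# forward filtering in a simple dynamic Bayesian network (HMM-like).
import numpy as np

MENTAL_STATES = ["agreeing", "concentrating", "disagreeing",
                 "interested", "thinking", "unsure"]
# Hypothetical display vocabulary; a real front end would detect these from video.
DISPLAYS = ["head_nod", "head_shake", "eyebrow_raise", "lip_pull", "neutral"]

# P(state_t | state_{t-1}): mental states tend to persist across frames.
TRANSITION = np.full((6, 6), 0.04)
np.fill_diagonal(TRANSITION, 0.80)

# P(display_t | state_t), rows = mental states, columns = displays.
# The numbers are placeholders chosen only so each row sums to one.
EMISSION = np.array([
    [0.60, 0.02, 0.08, 0.20, 0.10],   # agreeing
    [0.05, 0.05, 0.10, 0.05, 0.75],   # concentrating
    [0.02, 0.60, 0.08, 0.05, 0.25],   # disagreeing
    [0.10, 0.02, 0.48, 0.20, 0.20],   # interested
    [0.05, 0.05, 0.25, 0.05, 0.60],   # thinking
    [0.05, 0.15, 0.30, 0.10, 0.40],   # unsure
])

def filter_states(observed_displays):
    """Return P(mental state | displays seen so far) after each frame."""
    belief = np.full(len(MENTAL_STATES), 1.0 / len(MENTAL_STATES))
    history = []
    for display in observed_displays:
        obs = DISPLAYS.index(display)
        belief = TRANSITION.T @ belief      # predict one frame ahead
        belief = belief * EMISSION[:, obs]  # weight by the observed display
        belief /= belief.sum()              # renormalize to a distribution
        history.append(dict(zip(MENTAL_STATES, belief.round(3))))
    return history

if __name__ == "__main__":
    frames = ["neutral", "head_nod", "head_nod", "lip_pull"]
    for t, posterior in enumerate(filter_states(frames)):
        print(t, max(posterior, key=posterior.get), posterior)

Because inference is a single matrix-vector product and a reweighting per frame, this style of model can run at video rate, which is the property the abstract highlights; the actual system's structure and parameters are described in the paper itself.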

Keywords

Facial Expression · Basic Emotion · Dynamic Bayesian Network · Facial Action Coding System · Facial Display



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Rana el Kaliouby (1)
  • Peter Robinson (1)
  1. Computer Laboratory, University of Cambridge, Cambridge, UK
