Psychology-Aware Video-Enabled Workplace

  • Marco Anisetti
  • Valerio Bellandi
  • Ernesto Damiani
  • Fabrizio Beverina
  • Maria Rita Ciceri
  • Stefania Balzarotti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4159)

Abstract

The ability to recognize, interpret and express emotions plays a key role in human communication. Thanks to advanced video sensors and video-processing algorithms, current computer interfaces have become able to "see"; until recently, however, they could not plausibly "guess" user intentions, because available feature-extraction techniques could not provide the level of service needed to support sophisticated interpretation capabilities. Our approach relies on a set of novel face and posture recognition techniques that are efficient and robust enough to form the basis of a fully video-enabled intelligent pervasive workplace, capable of providing value-added services based on the real-time analysis of facial and postural data. We propose to build on our current work in this area to create an infrastructure for lightweight facial and posture analysis, allowing a variety of extended interactions between users and their work, market and entertainment environments.
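The lightweight video analysis the abstract envisions ultimately rests on extracting simple, cheap-to-compute cues from the raw frame stream before any higher-level face or posture model is applied. As an illustration only (not the authors' method), the sketch below shows one such front-end cue: frame-differencing "motion energy", i.e. the fraction of pixels whose intensity changes significantly between consecutive frames, which a pervasive-workplace pipeline could use to decide when to run heavier expression or posture analysis. The function name and threshold are hypothetical.

```python
import numpy as np

def motion_energy(prev, curr, threshold=25):
    """Fraction of pixels whose 8-bit intensity changed by more than
    `threshold` between two grayscale frames (a crude activity cue)."""
    # Cast to a signed type so the subtraction cannot wrap around.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float((diff > threshold).mean())

# Synthetic 120x160 grayscale frames standing in for a webcam stream.
still = np.full((120, 160), 128, dtype=np.uint8)
moved = still.copy()
moved[40:80, 60:100] = 200  # a bright 40x40 region changes, e.g. a raised hand

assert motion_energy(still, still) == 0.0
energy = motion_energy(still, moved)  # 1600 changed pixels out of 19200
```

In a real deployment this cheap gate would sit in front of the tracking and expression-recognition stages, so that the expensive models run only on frames where something is actually happening.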

Keywords

Facial Expression · Video Stream · Gesture Recognition · Video Analysis · Expression Recognition



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Marco Anisetti (1)
  • Valerio Bellandi (1)
  • Ernesto Damiani (1)
  • Fabrizio Beverina (2)
  • Maria Rita Ciceri (3)
  • Stefania Balzarotti (3)
  1. Department of Information Technology, University of Milan, Crema (CR), Italy
  2. Research Group, STMicroelectronics Advanced System, Italy
  3. Communication Psychology Lab, Catholic University of Milan, Italy