Emotion Recognition for Affect Aware Video Games

  • Mariusz Szwoch
  • Wioleta Szwoch
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 313)


This paper presents the idea of affect-aware video games. A brief review of automatic multimodal recognition of facial expressions and emotions is given. The first results of emotion recognition using depth data, as well as a prototype affect-aware video game, are presented.
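The core idea of an affect-aware game is a feedback loop: the game observes the player's emotional state (here, via facial-expression recognition on depth-sensor data) and adapts its behavior, for example its difficulty, in response. The following minimal sketch illustrates that loop; the emotion labels, the `adjust_difficulty` function, and the update values are hypothetical illustrations, not the method described in the paper.

```python
# Hypothetical sketch of an affect-aware difficulty loop.
# An external classifier (e.g. one working on depth-sensor frames)
# is assumed to emit one of these discrete emotion labels.
EMOTIONS = {"neutral", "joy", "anger", "fear", "sadness"}

def adjust_difficulty(current: float, emotion: str) -> float:
    """Adapt game difficulty (0.0 = easiest, 1.0 = hardest) to the
    player's detected emotion: frustration-related emotions lower
    difficulty, joy raises it, anything else leaves it unchanged.
    The step sizes are illustrative assumptions."""
    if emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion label: {emotion!r}")
    delta = {"anger": -0.1, "fear": -0.1, "joy": +0.1}.get(emotion, 0.0)
    # Clamp to the valid range so repeated updates stay in [0, 1].
    return min(1.0, max(0.0, current + delta))
```

In a real game loop, `adjust_difficulty` would be called once per recognition window (e.g. every few seconds) rather than per frame, to avoid oscillating on noisy per-frame predictions.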


Keywords: Facial Expression, Video Game, Emotion Recognition, Application Programming Interface, Depth Sensor





Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Gdansk University of Technology, Gdansk, Poland
