Applying Affect Recognition in Serious Games: The PlayMancer Project

  • Maher Ben Moussa
  • Nadia Magnenat-Thalmann
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5884)


This paper presents an overview and the state of the art in applying affect recognition in serious games to support patients in the treatment of behavioral and mental disorders and in chronic pain rehabilitation, within the framework of the European project PlayMancer. Three key technologies are discussed: facial affect recognition, the fusion of different affect recognition methods, and the application of affect recognition in serious games.
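As background for the fusion technology mentioned above, a minimal sketch of decision-level fusion — combining per-modality emotion probability scores with a weighted average — is shown below. This is an illustrative example only, not the project's actual method; the modality outputs, emotion labels, and weights are hypothetical.

```python
def fuse_decisions(modality_probs, weights):
    """Decision-level fusion: weighted average of per-modality
    emotion probability distributions (illustrative sketch)."""
    emotions = modality_probs[0].keys()
    total = sum(weights)
    fused = {
        e: sum(w * p[e] for w, p in zip(weights, modality_probs)) / total
        for e in emotions
    }
    # Return the winning emotion label and the fused distribution
    return max(fused, key=fused.get), fused

# Hypothetical outputs from a facial and a speech emotion recognizer
face = {"joy": 0.6, "anger": 0.1, "neutral": 0.3}
speech = {"joy": 0.4, "anger": 0.3, "neutral": 0.3}
label, scores = fuse_decisions([face, speech], weights=[0.7, 0.3])
# label → "joy"
```

Feature-level fusion, by contrast, would concatenate the raw feature vectors of each modality before classification; decision-level fusion is simpler to engineer because each recognizer can be developed and tuned independently.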


Keywords: Affective Computing · Emotion Recognition · Affect Recognition · Emotion Fusion · Affect Fusion · Serious Games





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Maher Ben Moussa (1)
  • Nadia Magnenat-Thalmann (1)
  1. Carouge/Geneva, Switzerland
