The Empathy Machine

Generated Music to Augment Empathic Interactions
  • David Kadish
  • Nikolai Kummer
  • Aleksandra Dulic
  • Homayoun Najjaran
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7522)

Abstract

The Empathy Machine is an interactive installation that augments a visitor's empathic sense during a social conversation. Empathy is a key component of interpersonal interaction that is often neglected by modern communication technologies. The system uses facial expression recognition to identify the emotional state of a user's conversation partner. It algorithmically generates music matching the partner's expressive state and plays it to the user in a non-disruptive manner. The result is a heightened emotional response in the user to their partner's emotional expression.
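The pipeline described above — recognized facial expression, mapped to an emotional state, translated into parameters for a generative music engine — can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the emotion labels, the valence/arousal coordinates, and the parameter mapping are all hypothetical placeholders.

```python
# Hypothetical sketch of the Empathy Machine pipeline: a detected facial
# expression is mapped to a valence/arousal pair, which then selects
# musical parameters (tempo, mode) for a generative music engine.
# All names and numeric mappings below are illustrative assumptions.

# Assumed valence/arousal coordinates for a few basic expressions.
EMOTION_TO_VA = {
    "happy":   (0.8, 0.6),
    "sad":     (-0.7, -0.4),
    "angry":   (-0.6, 0.7),
    "neutral": (0.0, 0.0),
}

def music_parameters(emotion: str) -> dict:
    """Translate a recognized emotion into generative music parameters."""
    valence, arousal = EMOTION_TO_VA[emotion]
    return {
        # Higher arousal -> faster tempo (illustrative 60-140 BPM range).
        "tempo_bpm": round(100 + 40 * arousal),
        # Positive valence -> major mode, negative -> minor.
        "mode": "major" if valence >= 0 else "minor",
    }

print(music_parameters("happy"))  # e.g. {'tempo_bpm': 124, 'mode': 'major'}
```

In the installation itself, this mapping would feed a rule-based music generator of the kind surveyed in the emotional-music literature the paper draws on, rather than printing a dictionary.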

Keywords

emotional music synthesis · facial expression recognition · empathic interaction


Copyright information

© IFIP International Federation for Information Processing 2012

Authors and Affiliations

  • David Kadish (1)
  • Nikolai Kummer (1)
  • Aleksandra Dulic (1)
  • Homayoun Najjaran (1)
  1. University of British Columbia, Kelowna, Canada