EmotionML

  • Felix Burkhardt
  • Catherine Pelachaud
  • Björn W. Schuller
  • Enrico Zovato
Chapter

Abstract

EmotionML is a W3C Recommendation for representing emotion-related states in data processing systems. Given the lack of agreement in the literature on the most relevant aspects of emotion, it is important to provide a relatively rich set of descriptive mechanisms. EmotionML can be used both as a standalone markup language and as a plug-in annotation in other contexts. Emotions can be represented in terms of four types of descriptions taken from the scientific literature: categories, dimensions, appraisals, and action tendencies, with a single <emotion> element containing one or more such descriptors. EmotionML provides a set of emotion vocabularies drawn from the scientific and psychological literature. Whenever users need a different vocabulary, however, they can simply define their own custom vocabulary and use it in the same way as the suggested ones. Several applications have already been realized on the basis of EmotionML.
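The structure sketched above can be illustrated with a short standalone document. This is an illustrative fragment based on the EmotionML 1.0 Recommendation, not a normative example: the specific vocabulary URIs, names, and values are assumptions chosen for the sketch.

```xml
<!-- A standalone EmotionML document. The root declares which vocabulary
     the category descriptors are taken from; each <emotion> combines
     one or more descriptor types (here a category plus two dimensions). -->
<emotionml version="1.0"
           xmlns="http://www.w3.org/2009/10/emotionml"
           category-set="http://www.w3.org/TR/emotion-voc/xml#big6">

  <!-- Category from the referenced "big6" vocabulary, dimensions
       from the custom vocabulary defined below. -->
  <emotion dimension-set="#pa-example">
    <category name="fear"/>
    <dimension name="arousal" value="0.8"/>
    <dimension name="pleasure" value="0.2"/>
  </emotion>

  <!-- A custom dimension vocabulary defined inline in the same
       document and referenced via its id. -->
  <vocabulary type="dimension" id="pa-example">
    <item name="pleasure"/>
    <item name="arousal"/>
  </vocabulary>
</emotionml>
```

The same `<emotion>` element could equally well be embedded as a plug-in annotation inside a host document (e.g. an EMMA or SSML document), which is the second usage mode mentioned above.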

Keywords

Emotion recognition • Sentiment analysis • Action tendency • Emotion category • Markup language
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Devillers, L., Vidrascu, L., & Lamel, L. (2005). Challenges in real-life emotion annotation and machine learning based detection. Neural Networks, 18(4), 407–422 (2005 special issue).
  2. Tekalp, A. M., & Ostermann, J. (2000). Face and 2-D mesh animation in MPEG-4. Image Communication Journal, 15, 387–421.
  3. Douglas-Cowie, E., Cowie, R., Sneddon, I., Cox, C., Lowry, O., McRorie, M., et al. (2007). The HUMAINE database: Addressing the collection and annotation of naturalistic and induced emotional data. In Proceedings of Affective Computing and Intelligent Interaction, Lisbon, Portugal (pp. 488–500).
  4. Kipp, M. (2014). ANVIL: A universal video research tool. In J. Durand, U. Gut, & G. Kristofferson (Eds.), Handbook of corpus phonology (pp. 420–436). Oxford: Oxford University Press.
  5. Schröder, M., Pirker, H., Lamolle, M., Burkhardt, F., Peter, C., & Zovato, E. (2011). Representing emotions and related states in technological systems. In P. Petta, R. Cowie, & C. Pelachaud (Eds.), Emotion-oriented systems – The HUMAINE handbook (pp. 367–386). Berlin: Springer.
  6. de Carolis, B., Pelachaud, C., Poggi, I., & Steedman, M. (2004). APML, a markup language for believable behavior generation. In H. Prendinger & M. Ishizuka (Eds.), Life-like characters (pp. 65–85). New York: Springer.
  7. Gebhard, P. (2005). ALMA – A layered model of affect. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-05), Utrecht.
  8. Ortony, A., Clore, G. L., & Collins, A. (1988). The cognitive structure of emotion. Cambridge, UK: Cambridge University Press.
  9. Schröder, M., Pirker, H., Lamolle, M., Burkhardt, F., Peter, C., & Zovato, E. (2011). Representing emotions and related states in technological systems. In Emotion-oriented systems – The HUMAINE handbook (pp. 367–386). Berlin: Springer.
  10. Frijda, N. H. (1986). The emotions. Cambridge, UK: Cambridge University Press.
  11. Troncy, R., Mannens, E., Pfeiffer, S., & van Deursen, D. (2012, March 15). Media Fragments URI 1.0: W3C Proposed Recommendation.
  12. Cowie, R., & Cornelius, R. R. (2003). Describing the emotional states that are expressed in speech. Speech Communication, 40(1–2), 5–32.
  13. Schröder, M., Pelachaud, C., Ashimura, K., Baggia, P., Burkhardt, F., Oltramari, A., et al. (2011). Vocabularies for EmotionML. http://www.w3.org/TR/emotion-voc/
  14. Cowie, R., Douglas-Cowie, E., Appolloni, B., Taylor, J., Romano, A., & Fellenz, W. (1999). What a neural net needs to know about emotion words. In N. Mastorakis (Ed.), Computational intelligence and applications (pp. 109–114). Singapore: World Scientific & Engineering Society Press.
  15. Fontaine, J. R. J., Scherer, K. R., Roesch, E. B., & Ellsworth, P. C. (2007). The world of emotions is not two-dimensional. Psychological Science, 18(12), 1050–1057.
  16. Frijda, N. H. (1986). The emotions. Cambridge, UK: Cambridge University Press.
  17. Mehrabian, A. (1996). Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14(4), 261–292.
  18. Scherer, K. R. (1999). Appraisal theory. In T. Dalgleish & M. J. Power (Eds.), Handbook of cognition & emotion (pp. 637–663). New York: Wiley.
  19. Gratch, J., & Marsella, S. (2004). A domain-independent framework for modeling emotion. Cognitive Systems Research, 5(4), 269–306.
  20. Cowie, R., Sawey, M., Doherty, C., Jaimovich, J., Fyans, C., & Stapleton, P. (2013). Gtrace: General trace program compatible with EmotionML. In 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 709–710). New York: IEEE.
  21. Burkhardt, F. (2011). Speechalyzer: A software tool to process speech data. In Proceedings of the ESSV, Elektronische Sprachsignalverarbeitung.
  22. Burkhardt, F., Polzehl, T., Stegmann, J., Metze, F., & Huber, R. (2009). Detecting real life anger. In Proceedings of ICASSP, Taipei, Taiwan (Vol. 4).
  23. Hantke, S., Appel, T., Eyben, F., & Schuller, B. (2015). iHEARu-PLAY: Introducing a game for crowdsourced data collection for affective computing. In Proceedings of the 1st International Workshop on Automatic Sentiment Analysis in the Wild (WASA 2015), Xi’an, P.R. China (pp. 891–897). New York: IEEE.
  24. Kouroupetroglou, G., Tsonos, D., & Vlahos, E. (2009). Docemox: A system for the typography-derived emotional annotation of documents. In Universal access in human-computer interaction. Applications and services (pp. 550–558). New York: Springer.
  25. Eyben, F., Weninger, F., Groß, F., & Schuller, B. (2013). Recent developments in openSMILE, the Munich open-source multimedia feature extractor. In Proceedings of the 21st ACM International Conference on Multimedia, MM 2013, Barcelona, Spain (pp. 835–838). New York: ACM.
  26. Eyben, F., Wöllmer, M., & Schuller, B. (2009, September). openEAR – Introducing the Munich open-source emotion and affect recognition toolkit. In Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, ACII 2009, Amsterdam, The Netherlands (Vol. I, pp. 576–581). HUMAINE Association. New York: IEEE.
  27. Piana, S., Staglianò, A., Camurri, A., & Odone, F. (2013). A set of full-body movement features for emotion recognition to help children affected by autism spectrum condition. In IDGEI International Workshop.
  28. Charfuelan, M., & Steiner, I. (2013). Expressive speech synthesis in MARY TTS using audiobook data and EmotionML. In Proceedings of Interspeech.
  29. Steiner, I., Schröder, M., & Klepp, A. (2013). The PAVOQUE corpus as a resource for analysis and synthesis of expressive speech. In Proceedings of Phonetik & Phonologie (Vol. 9).
  30. Bevacqua, E., Prepin, K., Niewiadomski, R., de Sevin, E., & Pelachaud, C. (2010). Greta: Towards an interactive conversational virtual companion. In Artificial companions in society: Perspectives on the present and future (pp. 143–156).
  31. Schröder, M., Bevacqua, E., Cowie, R., Eyben, F., Gunes, H., Heylen, D., et al. (2012). Building autonomous sensitive artificial listeners. IEEE Transactions on Affective Computing, 3(2), 165–183.
  32. Munezero, M., Kakkonen, T., & Montero, C. S. (2011). Towards automatic detection of antisocial behavior from texts. In Sentiment analysis where AI meets psychology (SAAIP) (p. 20).
  33. Burkhardt, F., Becker-Asano, C., Begoli, E., Cowie, R., Fobe, G., & Gebhard, P. (2014). Application of EmotionML. In Proceedings of the 5th International Workshop on Emotion, Sentiment, Social Signals and Linked Open Data (ES3LOD).

Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

  • Felix Burkhardt (1)
  • Catherine Pelachaud (2)
  • Björn W. Schuller (3, 4)
  • Enrico Zovato (5)
  1. Telekom Innovation Laboratories, Berlin, Germany
  2. LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, Paris, France
  3. Imperial College, London, UK
  4. University of Passau, Chair CIS, Passau, Germany
  5. Nuance, Turin, Italy