
EmotionML

  • Chapter in: Multimodal Interaction with W3C Standards

Abstract

EmotionML is a W3C Recommendation for representing emotion-related states in data processing systems. Given the lack of agreement in the literature on the most relevant aspects of emotion, it is important to provide a relatively rich set of descriptive mechanisms. EmotionML can be used both as a standalone markup language and as a plug-in annotation in other contexts. Emotions can be represented in terms of four types of descriptions taken from the scientific literature: categories, dimensions, appraisals, and action tendencies, with a single <emotion> element containing one or more such descriptors. EmotionML provides a set of emotion vocabularies drawn from the scientific and psychological literature. Users who need a different vocabulary, however, can simply define their own and use it in the same way as the suggested vocabularies. Several applications have already been realized on the basis of EmotionML.
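
To make the descriptor model concrete, the following is a minimal sketch of a standalone EmotionML document annotating a single state with one category and two dimension values. It assumes the "big6" category set and the PAD dimension set from the W3C vocabularies note (http://www.w3.org/TR/emotion-voc/); the numeric scores are purely illustrative.

    <emotionml version="1.0"
               xmlns="http://www.w3.org/2009/10/emotionml"
               category-set="http://www.w3.org/TR/emotion-voc/xml#big6"
               dimension-set="http://www.w3.org/TR/emotion-voc/xml#pad-dimensions">
      <emotion>
        <!-- category descriptor taken from the declared "big6" vocabulary -->
        <category name="happiness" confidence="0.8"/>
        <!-- dimension descriptors taken from the declared PAD vocabulary;
             values are illustrative and lie in the interval [0, 1] -->
        <dimension name="arousal" value="0.7"/>
        <dimension name="pleasure" value="0.9"/>
      </emotion>
    </emotionml>

When EmotionML is used as a plug-in annotation rather than as a standalone document, the same <emotion> element (with the vocabulary attributes carried on the element itself) can be embedded in a host markup language such as an EMMA result.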

Corresponding author

Correspondence to Felix Burkhardt.

Copyright information

© 2017 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Burkhardt, F., Pelachaud, C., Schuller, B.W., Zovato, E. (2017). EmotionML. In: Dahl, D. (eds) Multimodal Interaction with W3C Standards. Springer, Cham. https://doi.org/10.1007/978-3-319-42816-1_4

  • DOI: https://doi.org/10.1007/978-3-319-42816-1_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-42814-7

  • Online ISBN: 978-3-319-42816-1

  • eBook Packages: Engineering, Engineering (R0)
