DocEmoX: A System for the Typography-Derived Emotional Annotation of Documents

  • Georgios Kouroupetroglou
  • Dimitrios Tsonos
  • Eugenios Vlahos
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5616)

Abstract

This work presents the design and implementation of the DocEmoX system for the automated typography-derived extraction and emotional annotation of printed and electronic documents. The DocEmoX system targets the Design-for-All based multimodal accessibility of documents. The methodology builds on the results of a series of readers' emotional state response experiments that model the mapping of any combination of typographic elements onto analogous variations of the three emotional dimensions (Valence/Pleasure, Arousal, and Potency/Dominance) through a set of Emotional Rules. DocEmoX implements these Emotional Rules in XSL format and produces the annotated output document following the ODF standard and the W3C EmotionML recommendation.
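The core idea of the abstract — combining per-element Emotional Rules into shifts along the three PAD dimensions and serializing the result as an EmotionML annotation — can be sketched as follows. This is a minimal illustration only: the rule values, element names, and the `annotate` helper are invented for the example and are not taken from the DocEmoX experiments, which derive the actual rules empirically (and implement them in XSL, not Python).

```python
from dataclasses import dataclass

@dataclass
class PAD:
    """Reader-state estimate on the three emotional dimensions."""
    pleasure: float = 0.0
    arousal: float = 0.0
    dominance: float = 0.0

# Hypothetical "Emotional Rules": each typographic element shifts the
# estimate along the three dimensions. Values are illustrative only.
RULES = {
    "bold":       PAD(0.0, 0.3, 0.2),
    "italics":    PAD(0.1, 0.1, 0.0),
    "large-size": PAD(0.0, 0.4, 0.3),
}

def annotate(elements):
    """Sum the rule shifts for a combination of typographic elements and
    render an EmotionML-style <emotion> fragment using the PAD dimension
    set (values clamped to the [0, 1] range EmotionML expects)."""
    state = PAD()
    for name in elements:
        rule = RULES.get(name, PAD())
        state.pleasure += rule.pleasure
        state.arousal += rule.arousal
        state.dominance += rule.dominance

    def clamp(v):
        return max(0.0, min(1.0, v))

    return (
        '<emotion dimension-set='
        '"http://www.w3.org/TR/emotion-voc/xml#pad-dimensions">\n'
        f'  <dimension name="pleasure" value="{clamp(state.pleasure):.2f}"/>\n'
        f'  <dimension name="arousal" value="{clamp(state.arousal):.2f}"/>\n'
        f'  <dimension name="dominance" value="{clamp(state.dominance):.2f}"/>\n'
        '</emotion>'
    )
```

For example, text that is both bold and italicized would accumulate the shifts of both rules before being clamped and serialized; in the full system the resulting fragment would be attached to the corresponding span of the ODF output document.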

Keywords

document accessibility · emotional text-to-speech · emotional state modeling · typography · EmotionML · ODF

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Georgios Kouroupetroglou¹
  • Dimitrios Tsonos¹
  • Eugenios Vlahos¹

  1. Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece