Construction and Evaluation of Text-Dialog Corpus with Emotion Tags Focusing on Facial Expression in Comics
Large-scale text-dialog corpora with emotion tags are required to generate a knowledge base for emotional reasoning from text. Annotating emotion tags is known to suffer from instability, caused by the lack of non-linguistic expressions (e.g., speech and facial expressions) in text dialog. We aimed to construct a stable, usable text-dialog corpus with emotion tags, focusing on facial expressions in comics. Some comics contain many text dialogs similar to everyday conversation, making their text worth analyzing. We therefore extracted 29,538 sentences from 10 comic books and annotated them with face tags and emotion tags. Two annotators independently assigned “temporary face/emotion tags” to the stories and then agreed on the “correct face/emotion tags” through discussion, yielding 16,635 correct emotion tags. We then evaluated the stability and usability of the corpus. To assess stability, we measured the correspondence between temporary and correct tags: precision was 83.8% and recall was 78.8%, both higher than for annotation without facial expressions (precision = 56.2%, recall = 51.5%). To evaluate usability, we extracted emotional suffix expressions from the corpus using a probabilistic method. We thus constructed a text-dialog corpus with emotion tags and confirmed its stability and usability.
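The stability evaluation above compares each annotator's temporary tags against the agreed-upon correct tags. A minimal sketch of that comparison, assuming tags can be represented as (sentence id, emotion label) pairs (the paper's exact matching unit is not specified here, so this representation is an assumption):

```python
def precision_recall(temporary, correct):
    """Score an annotator's temporary tags against the agreed correct tags.

    precision = matched / temporary tags; recall = matched / correct tags.
    """
    temporary, correct = set(temporary), set(correct)
    matched = temporary & correct
    precision = len(matched) / len(temporary) if temporary else 0.0
    recall = len(matched) / len(correct) if correct else 0.0
    return precision, recall

# Hypothetical toy data: 8 temporary tags, 10 correct tags, 7 in common.
temp = {(i, "joy") for i in range(8)}        # sentences 0..7
gold = {(i, "joy") for i in range(1, 11)}    # sentences 1..10
p, r = precision_recall(temp, gold)
print(round(p, 3), round(r, 3))  # → 0.875 0.7
```

The same computation over the full corpus would produce the 83.8%/78.8% figures reported above; the toy data here is purely illustrative.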
Keywords: Facial Expression · Comic Book · Natural Language Generation · Emotional Reasoning · Everyday Conversation