Estimation of Dialogue Moods Using the Utterance Intervals Features
Many recent studies have focused on dialogue communication. In this paper, our goal is to develop a robot that supports communication between humans. We believe such support requires two key functions: estimating dialogue moods and behaving appropriately. Here we propose a dialogue mood estimation model based on utterance intervals. The model is constructed by relating subjective evaluations of several adjectives to utterance interval features. Through estimation experiments, we confirmed that the proposed system could estimate dialogue moods with a high degree of accuracy, especially for “excitement,” “seriousness,” and “closeness.” These results suggest that utterance interval features have high potential for dialogue mood estimation.
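The excerpt does not specify the form of the estimation model, only that it relates utterance interval features to subjective adjective ratings. The following is a minimal sketch under the assumption of a least-squares regression from simple interval statistics (mean, standard deviation, maximum, minimum of silent gaps between utterances) to a single adjective score; all function names and the feature set are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def interval_features(utterance_times):
    """Compute simple statistics of the silent gaps between utterances.

    `utterance_times` is a chronological list of (start, end) pairs in
    seconds; a gap is the silence between one utterance's end and the
    next utterance's start (overlapping speech counts as a zero gap).
    """
    starts = np.array([s for s, _ in utterance_times[1:]])
    ends = np.array([e for _, e in utterance_times[:-1]])
    gaps = np.clip(starts - ends, 0.0, None)
    return np.array([gaps.mean(), gaps.std(), gaps.max(), gaps.min()])

def fit_mood_model(feature_rows, adjective_scores):
    """Least-squares fit (with intercept) relating features to scores."""
    X = np.column_stack([np.ones(len(feature_rows)), feature_rows])
    w, *_ = np.linalg.lstsq(X, adjective_scores, rcond=None)
    return w

def predict_mood(w, features):
    """Predict one adjective score (e.g. 'excitement') from features."""
    return float(w[0] + w[1:] @ features)
```

One such regression would be fitted per adjective (“excitement,” “seriousness,” “closeness,” and so on), with the subjective ratings of annotated dialogues as the training targets.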
Keywords: Facial Expression Recognition, Background Music, Negative Accuracy Rate, Communication Robot, Utterance Interval