Estimation of Dialogue Moods Using the Utterance Intervals Features

  • Kaoru Toyoda
  • Yoshihiro Miyakoshi
  • Ryosuke Yamanishi
  • Shohei Kato
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 14)

Abstract

Many recent studies have focused on dialogue communication. In this paper, our goal is to enable a robot to support communication between humans. We believe such support requires two functions: estimating the mood of a dialogue and behaving appropriately in response. We propose a dialogue mood estimation model based on utterance intervals. The model is constructed by relating subjective evaluations on several adjective scales to features of the utterance intervals. Through estimation experiments, we confirmed that the proposed system could estimate dialogue moods with a high degree of accuracy, especially for “excitement,” “seriousness,” and “closeness.” These results suggest that utterance interval features have strong potential for dialogue mood estimation.
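As a rough illustration of the pipeline the abstract describes, the sketch below turns utterance timestamps into interval features and fits a per-adjective model. It is a minimal sketch under stated assumptions, not the paper's method: the (start, end) timestamp input format, the particular feature set (mean, spread, and maximum gap, plus the fraction of near-immediate replies), the 0.2 s threshold, and the use of ordinary least squares as the estimator are all illustrative choices, and the function names are hypothetical.

```python
# Minimal sketch: relate utterance-interval features to dialogue-mood ratings.
# Assumptions (not from the paper): each dialogue is a list of (start, end)
# utterance timestamps in seconds with at least two utterances, and each
# dialogue has one subjective rating per adjective (e.g. "excitement").
import numpy as np

def interval_features(utterances):
    """Summarize the silent gaps between consecutive utterances."""
    starts = np.array([s for s, _ in utterances])
    ends = np.array([e for _, e in utterances])
    gaps = np.clip(starts[1:] - ends[:-1], 0.0, None)  # overlaps -> zero gap
    return np.array([gaps.mean(), gaps.std(), gaps.max(),
                     float((gaps < 0.2).mean())])      # near-immediate replies

def fit_mood_model(dialogues, ratings):
    """Least-squares fit from interval features to one adjective's ratings."""
    X = np.array([interval_features(d) for d in dialogues])
    X = np.hstack([X, np.ones((len(X), 1))])           # bias term
    w, *_ = np.linalg.lstsq(X, np.array(ratings), rcond=None)
    return w

def estimate_mood(w, dialogue):
    """Predict the rating of a new dialogue on the fitted adjective scale."""
    x = np.append(interval_features(dialogue), 1.0)
    return float(x @ w)
```

Fitting one such model per adjective (“excitement,” “seriousness,” “closeness”) would yield an estimate along each mood dimension; the paper's own features and estimation model may differ from this stand-in.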

Keywords

Facial Expression Recognition · Background Music · Negative Accuracy Rate · Communication Robot · Utterance Interval



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Kaoru Toyoda¹
  • Yoshihiro Miyakoshi¹
  • Ryosuke Yamanishi¹
  • Shohei Kato¹
  1. Dept. of Computer Science and Engineering, Graduate School of Engineering, Nagoya Institute of Technology, Nagoya, Japan
