Influence of Personal Characteristics on Nonverbal Information for Estimating Communication Smoothness

  • Yumi Wakita
  • Yuta Yoshida
  • Mayu Nakamura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9733)

Abstract

To realize a system that can suggest new discussion topics for improving lively and smooth human-to-human communication, a method for estimating conversation smoothness is needed. To develop such an estimation method, we examined the effectiveness of the fundamental frequency (F0). Analysis of free dyadic conversations showed that the F0 of laughter utterances is strongly dependent on personal characteristics, whereas the F0 of non-laughter utterances is effective for estimating conversation smoothness.

Both the average and the standard deviation (SD) of F0 in smooth conversation tend to be higher than in non-smooth conversation. A t-test showed that the difference between the SDs of the "smooth" and "non-smooth" segments is significant at the 95% confidence level.
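The comparison described above can be sketched in code: compute the per-segment F0 mean and SD, then apply a two-sample t-test to the SDs of the two segment groups. This is a minimal illustration, not the authors' implementation; the F0 values below are hypothetical placeholders, and Welch's t statistic is used as one common form of the two-sample test.

```python
import math
import statistics

def f0_stats(f0_values):
    """Mean and standard deviation of F0 values (Hz) within one segment."""
    return statistics.mean(f0_values), statistics.stdev(f0_values)

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical per-segment F0 SDs (Hz): "smooth" vs. "non-smooth" segments.
smooth_sds = [42.0, 38.5, 45.1, 40.2, 44.7]
non_smooth_sds = [28.3, 31.0, 26.8, 30.5, 29.1]

t = welch_t(smooth_sds, non_smooth_sds)
print(f"t = {t:.2f}")
# A |t| larger than the 95% critical value indicates a significant
# difference in F0 variability between the two segment groups.
```

In practice the critical value depends on the degrees of freedom (Welch–Satterthwaite for unequal variances), and a library routine such as a two-sample t-test with unequal variances would typically be used instead of the hand-rolled statistic above.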

Keywords

Conversation smoothness · Nonverbal information · Fundamental frequency


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Osaka Institute of Technology, Osaka, Japan