
Boredom Recognition Based on Users’ Spontaneous Behaviors in Multiparty Human-Robot Interactions

  • Yasuhiro Shibasaki
  • Kotaro Funakoshi
  • Koichi Shinoda
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10132)

Abstract

Recognizing boredom in users interacting with machines is valuable for improving user experience in long-term human-machine interactions, especially for intelligent tutoring systems, health-care systems, and social assistants. This paper proposes a two-stage framework and feature design for boredom recognition in multiparty human-robot interactions. In the first stage, the framework detects boredom-indicating user behaviors from skeletal data obtained by motion capture; in the second stage, it recognizes boredom by combining the detection results with two types of multiparty information, i.e., gaze direction toward other participants and the entry and exit of participants. We experimentally confirmed the effectiveness of both the proposed framework and the multiparty information. Compared with a simple baseline method, the proposed framework gained 35 percentage points in F1 score.
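
The two-stage design described above can be pictured as a small pipeline. The following is a minimal Python sketch, assuming generic off-the-shelf classifiers (random forests from scikit-learn) and simple feature concatenation; the classifier types, feature names, and the TwoStageBoredomRecognizer class are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

class TwoStageBoredomRecognizer:
    """Illustrative two-stage pipeline: behaviors from skeletal data,
    then boredom from behavior detections plus multiparty cues."""

    def __init__(self):
        # Stage 1: detect boredom-indicating behaviors from skeletal features.
        self.behavior_detector = RandomForestClassifier(n_estimators=100)
        # Stage 2: recognize boredom from the stage-1 outputs combined with
        # multiparty features (gaze toward other participants, entries/exits).
        self.boredom_classifier = RandomForestClassifier(n_estimators=100)

    def fit(self, skeleton_feats, behavior_labels, multiparty_feats, boredom_labels):
        self.behavior_detector.fit(skeleton_feats, behavior_labels)
        behavior_probs = self.behavior_detector.predict_proba(skeleton_feats)
        stage2_feats = np.hstack([behavior_probs, multiparty_feats])
        self.boredom_classifier.fit(stage2_feats, boredom_labels)
        return self

    def predict(self, skeleton_feats, multiparty_feats):
        behavior_probs = self.behavior_detector.predict_proba(skeleton_feats)
        stage2_feats = np.hstack([behavior_probs, multiparty_feats])
        return self.boredom_classifier.predict(stage2_feats)

Feeding the second stage soft behavior probabilities rather than hard 0/1 detections is one plausible way to let the multiparty information compensate for uncertain behavior detections; the abstract does not state which variant the authors use.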

Keywords

Gesture · Posture · Gaze · Spoken dialogue

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Yasuhiro Shibasaki (1)
  • Kotaro Funakoshi (2)
  • Koichi Shinoda (1)
  1. Tokyo Institute of Technology, Meguro, Japan
  2. Honda Research Institute Japan Co., Ltd., Wako, Japan
