
A Meeting Log Structuring System Using Wearable Sensors

  • Ayumi Ohnishi
  • Kazuya Murao
  • Tsutomu Terada
  • Masahiko Tsukamoto
Conference paper
Part of the Lecture Notes on Data Engineering and Communications Technologies book series (LNDECT, volume 22)

Abstract

We propose a system that structures a meeting log by detecting and tagging participants’ actions during the meeting using acceleration sensors. The system detects each participant’s head movements, such as nodding or motion during utterances, with acceleration sensors attached to the heads of all participants. In addition, we developed the Meeting Review Tree, an application that recognizes participants’ utterances and three kinds of actions using acceleration and angular velocity sensors and tags them onto recorded videos. The proposed system organizes the meeting into a three-layer hierarchy and tags contexts as follows: the first layer represents transitions of the reporter during the meeting, the second layer represents changes of speaker within a report, and the third layer represents motions such as nodding. In the evaluation experiment, the recognition accuracy was 57.0% for the first layer and 61.0% for the second layer.
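The abstract describes two technical pieces: detecting head motions such as nodding from a head-worn acceleration sensor, and organizing the detected events into a three-layer structure (reporter, speaker, motion). The following is a minimal sketch of how such a pipeline could look; the threshold-based nod detector, the sampling rate, and all class and field names are illustrative assumptions and are not taken from the paper, which uses both acceleration and angular velocity signals.

```python
"""Minimal sketch (not the authors' implementation) of nod detection from a
head-worn accelerometer and the three-layer meeting structure described in
the abstract. All thresholds and names are illustrative assumptions."""

from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class MotionEvent:
    """Third layer: a motion such as a nod, with its time span in seconds."""
    label: str
    start: float
    end: float


@dataclass
class SpeakerSegment:
    """Second layer: one speaker's turn inside a report."""
    speaker: str
    start: float
    end: float
    motions: List[MotionEvent] = field(default_factory=list)


@dataclass
class ReporterSegment:
    """First layer: the span during which one participant is the reporter."""
    reporter: str
    start: float
    end: float
    speakers: List[SpeakerSegment] = field(default_factory=list)


def detect_nods(accel_z: np.ndarray, fs: float,
                threshold: float = 1.5, min_gap_s: float = 0.5) -> List[MotionEvent]:
    """Detect nod-like events from the vertical acceleration axis.

    A nod is assumed to appear as a short burst where the signal deviates
    from its mean by more than `threshold` (m/s^2); this simple detector is
    a stand-in for whatever classifier the paper actually uses.
    """
    deviation = np.abs(accel_z - np.mean(accel_z))
    events: List[MotionEvent] = []
    last_t = -min_gap_s
    for i in np.flatnonzero(deviation > threshold):
        t = i / fs
        if t - last_t >= min_gap_s:
            events.append(MotionEvent("nod", start=t, end=t + 0.3))
        last_t = t
    return events


if __name__ == "__main__":
    fs = 50.0  # assumed sampling rate of the head-worn sensor (Hz)
    t = np.arange(0, 10, 1 / fs)
    accel_z = 9.8 + 0.1 * np.random.randn(t.size)
    accel_z[200:210] += 3.0  # synthetic nod around t = 4 s

    log = ReporterSegment("participant A", 0.0, 10.0)
    turn = SpeakerSegment("participant B", 2.0, 8.0)
    turn.motions = detect_nods(accel_z, fs)  # tag detected nods onto this turn
    log.speakers.append(turn)
    print(log)
```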

Notes

Acknowledgements

This research was supported in part by a grant for Precursory Research for Embryonic Science and Technology (PRESTO) from the Japan Science and Technology Agency (JST) and by a Grant-in-Aid for Scientific Research (18H01059) from the Ministry of Education, Culture, Sports, Science and Technology.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Ayumi Ohnishi (1)
  • Kazuya Murao (2)
  • Tsutomu Terada (1, 3)
  • Masahiko Tsukamoto (1)
  1. Graduate School of Engineering, Kobe University, Kobe, Japan
  2. College of Information Science and Engineering, Ritsumeikan University, Kusatsu, Japan
  3. PRESTO, JST, Tokyo, Japan
